Test Report: KVM_Linux_containerd 18259

540f885a6d6e66248f116de2dd0a4210cbfa2dfa:2024-02-29:33352

Test fail (11/316)

TestIngressAddonLegacy/StartLegacyK8sCluster (297.13s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0229 17:54:42.039856   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:55:09.725866   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:56:33.750734   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.755978   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.766223   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.786453   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.826723   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.907072   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:34.067487   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:34.388102   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:35.028989   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:36.309235   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:38.869817   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:43.990011   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:54.230982   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:57:14.711337   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: exit status 109 (4m57.071761245s)

-- stdout --
	* [ingress-addon-legacy-180742] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node ingress-addon-legacy-180742 in cluster ingress-addon-legacy-180742
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 17:57:27 ingress-addon-legacy-180742 kubelet[6000]: F0229 17:57:27.972664    6000 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 17:57:29 ingress-addon-legacy-180742 kubelet[6024]: F0229 17:57:29.256416    6024 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 17:57:30 ingress-addon-legacy-180742 kubelet[6050]: F0229 17:57:30.511750    6050 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	
	

-- /stdout --
** stderr ** 
	I0229 17:52:39.642457   22516 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:52:39.642724   22516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:52:39.642734   22516 out.go:304] Setting ErrFile to fd 2...
	I0229 17:52:39.642738   22516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:52:39.642926   22516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 17:52:39.643491   22516 out.go:298] Setting JSON to false
	I0229 17:52:39.644344   22516 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2101,"bootTime":1709227059,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:52:39.644413   22516 start.go:139] virtualization: kvm guest
	I0229 17:52:39.647204   22516 out.go:177] * [ingress-addon-legacy-180742] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:52:39.648548   22516 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:52:39.648555   22516 notify.go:220] Checking for updates...
	I0229 17:52:39.649934   22516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:52:39.651432   22516 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 17:52:39.652779   22516 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:52:39.653832   22516 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:52:39.654905   22516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:52:39.656166   22516 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:52:39.689817   22516 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 17:52:39.691012   22516 start.go:299] selected driver: kvm2
	I0229 17:52:39.691031   22516 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:52:39.691042   22516 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:52:39.691692   22516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:52:39.691750   22516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:52:39.705501   22516 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:52:39.705563   22516 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:52:39.705771   22516 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 17:52:39.705833   22516 cni.go:84] Creating CNI manager for ""
	I0229 17:52:39.705846   22516 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 17:52:39.705853   22516 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:52:39.705863   22516 start_flags.go:323] config:
	{Name:ingress-addon-legacy-180742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:52:39.705994   22516 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:52:39.707502   22516 out.go:177] * Starting control plane node ingress-addon-legacy-180742 in cluster ingress-addon-legacy-180742
	I0229 17:52:39.708648   22516 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0229 17:52:39.861960   22516 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
	I0229 17:52:39.861992   22516 cache.go:56] Caching tarball of preloaded images
	I0229 17:52:39.862129   22516 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0229 17:52:39.863836   22516 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 17:52:39.865010   22516 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:52:40.022357   22516 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4?checksum=md5:b585eebe982180189fed21f0bd283cca -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
	I0229 17:53:03.002918   22516 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:53:03.003014   22516 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:53:04.051010   22516 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I0229 17:53:04.051318   22516 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/config.json ...
	I0229 17:53:04.051346   22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/config.json: {Name:mk35eb9355d8099644c0664e1cfbbd20444a3b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:04.051520   22516 start.go:365] acquiring machines lock for ingress-addon-legacy-180742: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 17:53:04.051562   22516 start.go:369] acquired machines lock for "ingress-addon-legacy-180742" in 18.476µs
	I0229 17:53:04.051579   22516 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-180742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 17:53:04.051661   22516 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 17:53:04.054929   22516 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 17:53:04.055077   22516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:53:04.055103   22516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:53:04.069062   22516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0229 17:53:04.069506   22516 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:53:04.070001   22516 main.go:141] libmachine: Using API Version  1
	I0229 17:53:04.070021   22516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:53:04.070385   22516 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:53:04.070581   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
	I0229 17:53:04.070728   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:04.070850   22516 start.go:159] libmachine.API.Create for "ingress-addon-legacy-180742" (driver="kvm2")
	I0229 17:53:04.070882   22516 client.go:168] LocalClient.Create starting
	I0229 17:53:04.070914   22516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem
	I0229 17:53:04.070950   22516 main.go:141] libmachine: Decoding PEM data...
	I0229 17:53:04.070971   22516 main.go:141] libmachine: Parsing certificate...
	I0229 17:53:04.071025   22516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem
	I0229 17:53:04.071044   22516 main.go:141] libmachine: Decoding PEM data...
	I0229 17:53:04.071055   22516 main.go:141] libmachine: Parsing certificate...
	I0229 17:53:04.071072   22516 main.go:141] libmachine: Running pre-create checks...
	I0229 17:53:04.071081   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .PreCreateCheck
	I0229 17:53:04.071367   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetConfigRaw
	I0229 17:53:04.071717   22516 main.go:141] libmachine: Creating machine...
	I0229 17:53:04.071739   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Create
	I0229 17:53:04.071838   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Creating KVM machine...
	I0229 17:53:04.073025   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found existing default KVM network
	I0229 17:53:04.073659   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.073531   22601 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0229 17:53:04.078465   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | trying to create private KVM network mk-ingress-addon-legacy-180742 192.168.39.0/24...
	I0229 17:53:04.140222   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting up store path in /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742 ...
	I0229 17:53:04.140266   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | private KVM network mk-ingress-addon-legacy-180742 192.168.39.0/24 created
	I0229 17:53:04.140279   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Building disk image from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:53:04.140294   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.140162   22601 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:53:04.140311   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Downloading /home/jenkins/minikube-integration/18259-6412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 17:53:04.352605   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.352498   22601 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa...
	I0229 17:53:04.601899   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.601787   22601 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/ingress-addon-legacy-180742.rawdisk...
	I0229 17:53:04.601939   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Writing magic tar header
	I0229 17:53:04.601958   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Writing SSH key tar header
	I0229 17:53:04.601972   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.601899   22601 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742 ...
	I0229 17:53:04.601993   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742
	I0229 17:53:04.602030   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742 (perms=drwx------)
	I0229 17:53:04.602053   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines (perms=drwxr-xr-x)
	I0229 17:53:04.602061   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines
	I0229 17:53:04.602069   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube (perms=drwxr-xr-x)
	I0229 17:53:04.602076   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:53:04.602085   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412
	I0229 17:53:04.602091   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 17:53:04.602100   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins
	I0229 17:53:04.602106   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home
	I0229 17:53:04.602114   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Skipping /home - not owner
	I0229 17:53:04.602125   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412 (perms=drwxrwxr-x)
	I0229 17:53:04.602131   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 17:53:04.602155   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 17:53:04.602184   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Creating domain...
	I0229 17:53:04.603237   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) define libvirt domain using xml: 
	I0229 17:53:04.603255   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <domain type='kvm'>
	I0229 17:53:04.603262   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   <name>ingress-addon-legacy-180742</name>
	I0229 17:53:04.603268   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   <memory unit='MiB'>4096</memory>
	I0229 17:53:04.603325   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   <vcpu>2</vcpu>
	I0229 17:53:04.603348   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   <features>
	I0229 17:53:04.603359   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <acpi/>
	I0229 17:53:04.603364   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <apic/>
	I0229 17:53:04.603369   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <pae/>
	I0229 17:53:04.603374   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     
	I0229 17:53:04.603380   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   </features>
	I0229 17:53:04.603386   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   <cpu mode='host-passthrough'>
	I0229 17:53:04.603396   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   
	I0229 17:53:04.603404   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   </cpu>
	I0229 17:53:04.603417   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   <os>
	I0229 17:53:04.603425   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <type>hvm</type>
	I0229 17:53:04.603451   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <boot dev='cdrom'/>
	I0229 17:53:04.603466   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <boot dev='hd'/>
	I0229 17:53:04.603474   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <bootmenu enable='no'/>
	I0229 17:53:04.603483   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   </os>
	I0229 17:53:04.603490   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   <devices>
	I0229 17:53:04.603498   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <disk type='file' device='cdrom'>
	I0229 17:53:04.603521   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/boot2docker.iso'/>
	I0229 17:53:04.603535   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <target dev='hdc' bus='scsi'/>
	I0229 17:53:04.603545   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <readonly/>
	I0229 17:53:04.603557   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     </disk>
	I0229 17:53:04.603569   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <disk type='file' device='disk'>
	I0229 17:53:04.603594   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 17:53:04.603615   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/ingress-addon-legacy-180742.rawdisk'/>
	I0229 17:53:04.603626   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <target dev='hda' bus='virtio'/>
	I0229 17:53:04.603635   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     </disk>
	I0229 17:53:04.603641   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <interface type='network'>
	I0229 17:53:04.603649   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <source network='mk-ingress-addon-legacy-180742'/>
	I0229 17:53:04.603655   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <model type='virtio'/>
	I0229 17:53:04.603665   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     </interface>
	I0229 17:53:04.603672   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <interface type='network'>
	I0229 17:53:04.603679   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <source network='default'/>
	I0229 17:53:04.603686   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <model type='virtio'/>
	I0229 17:53:04.603694   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     </interface>
	I0229 17:53:04.603700   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <serial type='pty'>
	I0229 17:53:04.603711   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <target port='0'/>
	I0229 17:53:04.603719   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     </serial>
	I0229 17:53:04.603724   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <console type='pty'>
	I0229 17:53:04.603732   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <target type='serial' port='0'/>
	I0229 17:53:04.603736   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     </console>
	I0229 17:53:04.603742   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     <rng model='virtio'>
	I0229 17:53:04.603747   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)       <backend model='random'>/dev/random</backend>
	I0229 17:53:04.603755   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     </rng>
	I0229 17:53:04.603760   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     
	I0229 17:53:04.603767   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)     
	I0229 17:53:04.603772   22516 main.go:141] libmachine: (ingress-addon-legacy-180742)   </devices>
	I0229 17:53:04.603779   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </domain>
	I0229 17:53:04.603785   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) 
	I0229 17:53:04.607811   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e3:ac:0d in network default
	I0229 17:53:04.608337   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Ensuring networks are active...
	I0229 17:53:04.608360   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:04.608966   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Ensuring network default is active
	I0229 17:53:04.609324   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Ensuring network mk-ingress-addon-legacy-180742 is active
	I0229 17:53:04.609818   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Getting domain xml...
	I0229 17:53:04.610460   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Creating domain...
	I0229 17:53:05.776964   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Waiting to get IP...
	I0229 17:53:05.777634   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:05.777992   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:05.778032   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:05.777979   22601 retry.go:31] will retry after 264.939748ms: waiting for machine to come up
	I0229 17:53:06.044475   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:06.044873   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:06.044902   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:06.044812   22601 retry.go:31] will retry after 265.069297ms: waiting for machine to come up
	I0229 17:53:06.310979   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:06.311344   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:06.311368   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:06.311305   22601 retry.go:31] will retry after 467.556262ms: waiting for machine to come up
	I0229 17:53:06.780770   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:06.781267   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:06.781291   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:06.781216   22601 retry.go:31] will retry after 421.595715ms: waiting for machine to come up
	I0229 17:53:07.204746   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:07.205135   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:07.205160   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:07.205096   22601 retry.go:31] will retry after 532.72974ms: waiting for machine to come up
	I0229 17:53:07.739784   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:07.740232   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:07.740256   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:07.740200   22601 retry.go:31] will retry after 618.789244ms: waiting for machine to come up
	I0229 17:53:08.360889   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:08.361282   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:08.361307   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:08.361241   22601 retry.go:31] will retry after 789.088812ms: waiting for machine to come up
	I0229 17:53:09.151658   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:09.152106   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:09.152122   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:09.152073   22601 retry.go:31] will retry after 1.087236245s: waiting for machine to come up
	I0229 17:53:10.241383   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:10.241721   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:10.241763   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:10.241710   22601 retry.go:31] will retry after 1.640986162s: waiting for machine to come up
	I0229 17:53:11.884465   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:11.884804   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:11.884830   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:11.884763   22601 retry.go:31] will retry after 1.591325231s: waiting for machine to come up
	I0229 17:53:13.477258   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:13.477643   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:13.477678   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:13.477607   22601 retry.go:31] will retry after 2.578096176s: waiting for machine to come up
	I0229 17:53:16.058742   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:16.059164   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:16.059192   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:16.059116   22601 retry.go:31] will retry after 2.779197081s: waiting for machine to come up
	I0229 17:53:18.841959   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:18.842485   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:18.842515   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:18.842448   22601 retry.go:31] will retry after 3.651517306s: waiting for machine to come up
	I0229 17:53:22.498334   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:22.498758   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
	I0229 17:53:22.498780   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:22.498724   22601 retry.go:31] will retry after 3.9256536s: waiting for machine to come up
	I0229 17:53:26.426923   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:26.427485   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Found IP for machine: 192.168.39.153
	I0229 17:53:26.427510   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has current primary IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:26.427526   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Reserving static IP address...
	I0229 17:53:26.427831   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-180742", mac: "52:54:00:e7:12:1e", ip: "192.168.39.153"} in network mk-ingress-addon-legacy-180742
	I0229 17:53:26.495527   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Getting to WaitForSSH function...
	I0229 17:53:26.495569   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Reserved static IP address: 192.168.39.153
	I0229 17:53:26.495627   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Waiting for SSH to be available...
	I0229 17:53:26.498107   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:26.498448   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742
	I0229 17:53:26.498472   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-180742 interface with MAC address 52:54:00:e7:12:1e
	I0229 17:53:26.498661   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH client type: external
	I0229 17:53:26.498689   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa (-rw-------)
	I0229 17:53:26.498723   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 17:53:26.498740   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | About to run SSH command:
	I0229 17:53:26.498771   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | exit 0
	I0229 17:53:26.502205   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | SSH cmd err, output: exit status 255: 
	I0229 17:53:26.502228   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 17:53:26.502249   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | command : exit 0
	I0229 17:53:26.502267   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | err     : exit status 255
	I0229 17:53:26.502279   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | output  : 
	I0229 17:53:29.502628   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Getting to WaitForSSH function...
	I0229 17:53:29.505700   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.506115   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:29.506149   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.506307   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH client type: external
	I0229 17:53:29.506339   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa (-rw-------)
	I0229 17:53:29.506370   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 17:53:29.506392   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | About to run SSH command:
	I0229 17:53:29.506419   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | exit 0
	I0229 17:53:29.630898   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | SSH cmd err, output: <nil>: 
	I0229 17:53:29.631119   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) KVM machine creation complete!
	I0229 17:53:29.631458   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetConfigRaw
	I0229 17:53:29.632013   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:29.632178   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:29.632346   22516 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 17:53:29.632360   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetState
	I0229 17:53:29.633464   22516 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 17:53:29.633477   22516 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 17:53:29.633482   22516 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 17:53:29.633488   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:29.635606   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.635914   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:29.635940   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.636061   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:29.636220   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.636368   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.636515   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:29.636704   22516 main.go:141] libmachine: Using SSH client type: native
	I0229 17:53:29.636931   22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0229 17:53:29.636946   22516 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 17:53:29.746075   22516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:53:29.746097   22516 main.go:141] libmachine: Detecting the provisioner...
	I0229 17:53:29.746108   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:29.748636   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.748959   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:29.748990   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.749123   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:29.749306   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.749442   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.749565   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:29.749688   22516 main.go:141] libmachine: Using SSH client type: native
	I0229 17:53:29.749850   22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0229 17:53:29.749864   22516 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 17:53:29.859893   22516 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 17:53:29.859960   22516 main.go:141] libmachine: found compatible host: buildroot
	I0229 17:53:29.859971   22516 main.go:141] libmachine: Provisioning with buildroot...
	I0229 17:53:29.859980   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
	I0229 17:53:29.860270   22516 buildroot.go:166] provisioning hostname "ingress-addon-legacy-180742"
	I0229 17:53:29.860300   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
	I0229 17:53:29.860507   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:29.862886   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.863200   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:29.863234   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.863343   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:29.863523   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.863634   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.863763   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:29.863928   22516 main.go:141] libmachine: Using SSH client type: native
	I0229 17:53:29.864136   22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0229 17:53:29.864154   22516 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-180742 && echo "ingress-addon-legacy-180742" | sudo tee /etc/hostname
	I0229 17:53:29.985841   22516 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-180742
	
	I0229 17:53:29.985870   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:29.988295   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.988619   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:29.988654   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:29.988788   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:29.988984   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.989143   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:29.989262   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:29.989433   22516 main.go:141] libmachine: Using SSH client type: native
	I0229 17:53:29.989603   22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0229 17:53:29.989629   22516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-180742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-180742/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-180742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 17:53:30.104093   22516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:53:30.104117   22516 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 17:53:30.104137   22516 buildroot.go:174] setting up certificates
	I0229 17:53:30.104146   22516 provision.go:83] configureAuth start
	I0229 17:53:30.104154   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
	I0229 17:53:30.104397   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
	I0229 17:53:30.106621   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.106955   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.106989   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.107088   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:30.109165   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.109456   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.109482   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.109615   22516 provision.go:138] copyHostCerts
	I0229 17:53:30.109647   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 17:53:30.109675   22516 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 17:53:30.109695   22516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 17:53:30.109756   22516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 17:53:30.109827   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 17:53:30.109844   22516 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 17:53:30.109851   22516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 17:53:30.109873   22516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 17:53:30.109916   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 17:53:30.109933   22516 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 17:53:30.109939   22516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 17:53:30.109959   22516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 17:53:30.110002   22516 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-180742 san=[192.168.39.153 192.168.39.153 localhost 127.0.0.1 minikube ingress-addon-legacy-180742]
	I0229 17:53:30.474647   22516 provision.go:172] copyRemoteCerts
	I0229 17:53:30.474701   22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 17:53:30.474724   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:30.476954   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.477285   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.477313   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.477523   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:30.477704   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:30.477892   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:30.478025   22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:53:30.562275   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 17:53:30.562337   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 17:53:30.588008   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 17:53:30.588069   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0229 17:53:30.613000   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 17:53:30.613052   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 17:53:30.638094   22516 provision.go:86] duration metric: configureAuth took 533.938114ms
	I0229 17:53:30.638116   22516 buildroot.go:189] setting minikube options for container-runtime
	I0229 17:53:30.638290   22516 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 17:53:30.638311   22516 main.go:141] libmachine: Checking connection to Docker...
	I0229 17:53:30.638322   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetURL
	I0229 17:53:30.639418   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using libvirt version 6000000
	I0229 17:53:30.641623   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.641917   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.641939   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.642123   22516 main.go:141] libmachine: Docker is up and running!
	I0229 17:53:30.642137   22516 main.go:141] libmachine: Reticulating splines...
	I0229 17:53:30.642144   22516 client.go:171] LocalClient.Create took 26.571253433s
	I0229 17:53:30.642171   22516 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-180742" took 26.571317685s
	I0229 17:53:30.642183   22516 start.go:300] post-start starting for "ingress-addon-legacy-180742" (driver="kvm2")
	I0229 17:53:30.642201   22516 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 17:53:30.642229   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:30.642459   22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 17:53:30.642480   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:30.644553   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.644911   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.644942   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.645073   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:30.645224   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:30.645382   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:30.645486   22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:53:30.730727   22516 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 17:53:30.735592   22516 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 17:53:30.735611   22516 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 17:53:30.735664   22516 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 17:53:30.735742   22516 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 17:53:30.735753   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> /etc/ssl/certs/137212.pem
	I0229 17:53:30.735841   22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 17:53:30.747992   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 17:53:30.774002   22516 start.go:303] post-start completed in 131.804098ms
	I0229 17:53:30.774041   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetConfigRaw
	I0229 17:53:30.774577   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
	I0229 17:53:30.777022   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.777351   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.777381   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.777573   22516 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/config.json ...
	I0229 17:53:30.777734   22516 start.go:128] duration metric: createHost completed in 26.726064734s
	I0229 17:53:30.777753   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:30.779636   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.779904   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.779929   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.780066   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:30.780211   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:30.780365   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:30.780495   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:30.780656   22516 main.go:141] libmachine: Using SSH client type: native
	I0229 17:53:30.780816   22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.153 22 <nil> <nil>}
	I0229 17:53:30.780826   22516 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 17:53:30.887609   22516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709229210.861668051
	
	I0229 17:53:30.887628   22516 fix.go:206] guest clock: 1709229210.861668051
	I0229 17:53:30.887634   22516 fix.go:219] Guest: 2024-02-29 17:53:30.861668051 +0000 UTC Remote: 2024-02-29 17:53:30.777744277 +0000 UTC m=+51.186873393 (delta=83.923774ms)
	I0229 17:53:30.887652   22516 fix.go:190] guest clock delta is within tolerance: 83.923774ms
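The clock check above runs date +%s.%N in the guest and compares the result against the local wall clock, accepting the machine when the absolute delta stays inside a tolerance. A small Go sketch of that comparison; the 2s tolerance used here is an assumption for illustration, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's "date +%s.%N" output and returns the
// signed difference between the local clock and the guest clock.
func guestClockDelta(guestOut string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return local.Sub(guest), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	delta, err := guestClockDelta("1709229210.861668051", time.Unix(1709229210, 945000000))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync\n", delta)
	}
}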
	I0229 17:53:30.887665   22516 start.go:83] releasing machines lock for "ingress-addon-legacy-180742", held for 26.836086747s
	I0229 17:53:30.887683   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:30.887920   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
	I0229 17:53:30.890493   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.890815   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.890830   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.890963   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:30.891420   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:30.891591   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:53:30.891655   22516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 17:53:30.891698   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:30.891831   22516 ssh_runner.go:195] Run: cat /version.json
	I0229 17:53:30.891856   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:53:30.894202   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.894273   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.894582   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.894607   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.894637   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:30.894664   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:30.894739   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:30.894889   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:53:30.894915   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:30.895053   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:53:30.895085   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:30.895153   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:53:30.895217   22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:53:30.895299   22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:53:30.999049   22516 ssh_runner.go:195] Run: systemctl --version
	I0229 17:53:31.005852   22516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 17:53:31.012144   22516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 17:53:31.012219   22516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 17:53:31.029685   22516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 17:53:31.029702   22516 start.go:475] detecting cgroup driver to use...
	I0229 17:53:31.029777   22516 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 17:53:31.067259   22516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 17:53:31.082082   22516 docker.go:217] disabling cri-docker service (if available) ...
	I0229 17:53:31.082153   22516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 17:53:31.096972   22516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 17:53:31.112291   22516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 17:53:31.230375   22516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 17:53:31.372170   22516 docker.go:233] disabling docker service ...
	I0229 17:53:31.372230   22516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 17:53:31.388297   22516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 17:53:31.401433   22516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 17:53:31.535521   22516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 17:53:31.646072   22516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 17:53:31.661978   22516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:53:31.682230   22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 17:53:31.693153   22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 17:53:31.703827   22516 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 17:53:31.703876   22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 17:53:31.714461   22516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:53:31.725115   22516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 17:53:31.735600   22516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:53:31.746476   22516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 17:53:31.757382   22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
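The sed sequence above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image, sets SystemdCgroup to false so containerd uses the cgroupfs driver, normalizes the runtime type to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. A hedged Go sketch of the same line-oriented rewrite over the file contents; rewriteContainerdConfig is illustrative and only covers the keys touched in the log.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteContainerdConfig applies the substitutions performed by the sed
// commands above: sandbox_image, restrict_oom_score_adj, SystemdCgroup and
// the runc runtime type.
func rewriteContainerdConfig(cfg string) string {
	subs := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.2"`},
		{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	}
	for _, s := range subs {
		cfg = regexp.MustCompile(s.re).ReplaceAllString(cfg, s.repl)
	}
	return strings.ReplaceAll(cfg, `"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`)
}

func main() {
	in := "  sandbox_image = \"k8s.gcr.io/pause:3.6\"\n  SystemdCgroup = true\n"
	fmt.Print(rewriteContainerdConfig(in))
}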
	I0229 17:53:31.768023   22516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 17:53:31.777601   22516 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 17:53:31.777641   22516 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 17:53:31.791791   22516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 17:53:31.802929   22516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:53:31.914961   22516 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 17:53:31.945264   22516 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 17:53:31.945353   22516 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 17:53:31.950161   22516 retry.go:31] will retry after 688.012804ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 17:53:32.639170   22516 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
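After restarting containerd, the start path polls for /run/containerd/containerd.sock, retrying the stat until the socket appears or the 60s budget expires, as in the retry entries above. A minimal sketch of that wait loop; waitForSocket and the 500ms poll interval are assumptions for illustration.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout expires,
// mirroring the retrying stat calls in the log above.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Short budget here; the start path above waits up to 60s.
	if err := waitForSocket("/run/containerd/containerd.sock", 2*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}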
	I0229 17:53:32.645083   22516 start.go:543] Will wait 60s for crictl version
	I0229 17:53:32.645147   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:32.649303   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 17:53:32.684369   22516 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 17:53:32.684435   22516 ssh_runner.go:195] Run: containerd --version
	I0229 17:53:32.712421   22516 ssh_runner.go:195] Run: containerd --version
	I0229 17:53:32.741751   22516 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
	I0229 17:53:32.743063   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
	I0229 17:53:32.745366   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:32.745706   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:53:32.745735   22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:53:32.745896   22516 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 17:53:32.750397   22516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:53:32.763828   22516 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0229 17:53:32.763886   22516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 17:53:32.800132   22516 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 17:53:32.800210   22516 ssh_runner.go:195] Run: which lz4
	I0229 17:53:32.805142   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 17:53:32.805247   22516 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 17:53:32.809668   22516 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 17:53:32.809699   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (494845061 bytes)
	I0229 17:53:34.632740   22516 containerd.go:548] Took 1.827522 seconds to copy over tarball
	I0229 17:53:34.632818   22516 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 17:53:37.357730   22516 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.724878051s)
	I0229 17:53:37.357766   22516 containerd.go:555] Took 2.725004 seconds to extract the tarball
	I0229 17:53:37.357781   22516 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 17:53:37.404312   22516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:53:37.521269   22516 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 17:53:37.553529   22516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 17:53:37.591056   22516 retry.go:31] will retry after 221.455187ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T17:53:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 17:53:37.813513   22516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 17:53:37.853650   22516 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 17:53:37.853686   22516 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 17:53:37.853732   22516 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:53:37.853771   22516 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:53:37.853782   22516 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:53:37.853831   22516 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:53:37.853845   22516 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:53:37.853903   22516 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:53:37.853835   22516 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 17:53:37.854016   22516 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 17:53:37.855170   22516 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:53:37.855177   22516 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:53:37.855184   22516 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 17:53:37.855198   22516 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:53:37.855237   22516 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 17:53:37.855320   22516 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:53:37.855403   22516 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:53:37.855438   22516 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:53:38.058782   22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0229 17:53:38.058846   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:38.084033   22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346"
	I0229 17:53:38.084097   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:38.194683   22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1"
	I0229 17:53:38.194758   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:38.204893   22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba"
	I0229 17:53:38.204968   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:38.214621   22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290"
	I0229 17:53:38.214688   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:38.237990   22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5"
	I0229 17:53:38.238073   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:38.243755   22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"
	I0229 17:53:38.243821   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:38.357070   22516 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 17:53:38.357106   22516 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0229 17:53:38.357157   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:38.590238   22516 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 17:53:38.590285   22516 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:53:38.590332   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:39.214141   22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.019365306s)
	I0229 17:53:39.214193   22516 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 17:53:39.214220   22516 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:53:39.214267   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:39.214752   22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.009765173s)
	I0229 17:53:39.214810   22516 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 17:53:39.214837   22516 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:53:39.214877   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:39.246335   22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.0316193s)
	I0229 17:53:39.246400   22516 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 17:53:39.246445   22516 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:53:39.246494   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:39.246824   22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.008721893s)
	I0229 17:53:39.246865   22516 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 17:53:39.246885   22516 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 17:53:39.246922   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:39.247262   22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.003422998s)
	I0229 17:53:39.247311   22516 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 17:53:39.247328   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0229 17:53:39.247342   22516 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:53:39.247366   22516 ssh_runner.go:195] Run: which crictl
	I0229 17:53:39.247367   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:53:39.247414   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:53:39.247429   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:53:39.251235   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:53:39.260307   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0229 17:53:39.389506   22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0229 17:53:39.389521   22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 17:53:39.389567   22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 17:53:39.389623   22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0229 17:53:39.389643   22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 17:53:39.389684   22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 17:53:39.389742   22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0229 17:53:39.424702   22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0229 17:53:39.789810   22516 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 17:53:39.789864   22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 17:53:39.984139   22516 cache_images.go:92] LoadImages completed in 2.130435683s
	W0229 17:53:39.984226   22516 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
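LoadImages checks each required image against the runtime (ctr images check) and, when the expected sha is missing, falls back to loading the image from the local cache directory; in this run the cache files do not exist either, hence the warning above. A small sketch of that decision, assuming a map of image name to the sha the runtime reports (empty when nothing is preloaded):

package main

import "fmt"

// needsTransfer reports whether an image must be loaded into the runtime:
// either it is missing entirely or present under a different sha.
func needsTransfer(runtime map[string]string, image, wantSha string) bool {
	got, ok := runtime[image]
	return !ok || got != wantSha
}

func main() {
	runtime := map[string]string{} // nothing preloaded, as in the log
	required := map[string]string{
		"registry.k8s.io/pause:3.2":               "80d28bedfe5d",
		"registry.k8s.io/kube-apiserver:v1.18.20": "7d8d2960de69",
	}
	for img, sha := range required {
		if needsTransfer(runtime, img, sha) {
			fmt.Printf("%q needs transfer, will try the local image cache\n", img)
		}
	}
}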
	I0229 17:53:39.984276   22516 ssh_runner.go:195] Run: sudo crictl info
	I0229 17:53:40.021613   22516 cni.go:84] Creating CNI manager for ""
	I0229 17:53:40.021637   22516 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 17:53:40.021651   22516 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 17:53:40.021688   22516 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.153 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-180742 NodeName:ingress-addon-legacy-180742 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 17:53:40.021849   22516 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-180742"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 17:53:40.021935   22516 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-180742 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
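The kubeadm config and kubelet unit above are rendered from the options struct logged just before them: node name, advertise address, CRI socket, pod and service CIDRs, and the component extraArgs flow straight into the InitConfiguration/ClusterConfiguration documents. A minimal text/template sketch of that rendering; the template covers only the InitConfiguration stanza and is illustrative, not minikube's full template.

package main

import (
	"os"
	"text/template"
)

// Only the fields needed for the short template below; the real options
// struct (kubeadm.go:176) carries many more.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.39.153",
		APIServerPort:    8443,
		NodeName:         "ingress-addon-legacy-180742",
		CRISocket:        "/run/containerd/containerd.sock",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}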
	I0229 17:53:40.021987   22516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 17:53:40.032773   22516 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 17:53:40.032841   22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 17:53:40.043086   22516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0229 17:53:40.061645   22516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 17:53:40.079895   22516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2137 bytes)
	I0229 17:53:40.097693   22516 ssh_runner.go:195] Run: grep 192.168.39.153	control-plane.minikube.internal$ /etc/hosts
	I0229 17:53:40.101928   22516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:53:40.115363   22516 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742 for IP: 192.168.39.153
	I0229 17:53:40.115390   22516 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:40.115541   22516 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 17:53:40.115593   22516 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 17:53:40.115649   22516 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.key
	I0229 17:53:40.115676   22516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.crt with IP's: []
	I0229 17:53:40.283545   22516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.crt ...
	I0229 17:53:40.283577   22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.crt: {Name:mk35a83d8d385ec160686cf1ec74716b8a23de49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:40.283767   22516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.key ...
	I0229 17:53:40.283783   22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.key: {Name:mk8674d5d9bb0261a5ad50a34db3ee19436bf1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:40.283889   22516 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd
	I0229 17:53:40.283908   22516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd with IP's: [192.168.39.153 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 17:53:40.785142   22516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd ...
	I0229 17:53:40.785174   22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd: {Name:mka38470ed0efd8cfe51c8a14236dbbac9952717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:40.785351   22516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd ...
	I0229 17:53:40.785368   22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd: {Name:mka1e7b8fd9707f1fa16d6add705e2b0c401d463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:40.785467   22516 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt
	I0229 17:53:40.785572   22516 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key
	I0229 17:53:40.785659   22516 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key
	I0229 17:53:40.785679   22516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt with IP's: []
	I0229 17:53:40.967870   22516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt ...
	I0229 17:53:40.967902   22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt: {Name:mk6ab76f4fa1fe99f982bfe1389c2c74b27d9f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:40.968073   22516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key ...
	I0229 17:53:40.968093   22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key: {Name:mk34aed281d82d8c6879fefc48888497c0319847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:40.968248   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 17:53:40.968272   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 17:53:40.968287   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 17:53:40.968301   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 17:53:40.968321   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 17:53:40.968337   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 17:53:40.968352   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 17:53:40.968371   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 17:53:40.968441   22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 17:53:40.968492   22516 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 17:53:40.968507   22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 17:53:40.968554   22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 17:53:40.968587   22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 17:53:40.968626   22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 17:53:40.968679   22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 17:53:40.968728   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> /usr/share/ca-certificates/137212.pem
	I0229 17:53:40.968750   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:53:40.968768   22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem -> /usr/share/ca-certificates/13721.pem
	I0229 17:53:40.969372   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 17:53:40.996861   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 17:53:41.022226   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 17:53:41.048596   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 17:53:41.074303   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 17:53:41.099724   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 17:53:41.125524   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 17:53:41.151404   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 17:53:41.177029   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 17:53:41.202706   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 17:53:41.228110   22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 17:53:41.253439   22516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 17:53:41.271136   22516 ssh_runner.go:195] Run: openssl version
	I0229 17:53:41.277332   22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 17:53:41.288560   22516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 17:53:41.293511   22516 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 17:53:41.293560   22516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 17:53:41.299702   22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 17:53:41.310896   22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 17:53:41.322034   22516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 17:53:41.330042   22516 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 17:53:41.330079   22516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 17:53:41.336052   22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 17:53:41.347018   22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 17:53:41.357995   22516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:53:41.362852   22516 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:53:41.362885   22516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:53:41.368676   22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
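The repeated pattern above (test -s, ln -fs into /usr/share/ca-certificates, then openssl -hash and a hash-named link under /etc/ssl/certs) is how each CA is installed into the guest trust store. A minimal by-hand version of the minikubeCA step, assuming shell access to the guest (e.g. minikube ssh -p ingress-addon-legacy-180742):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash (b5213941 here)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # the same link created just above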
	I0229 17:53:41.379612   22516 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 17:53:41.384529   22516 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 17:53:41.384585   22516 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-180742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.153 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:53:41.384671   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 17:53:41.384742   22516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 17:53:41.423715   22516 cri.go:89] found id: ""
	I0229 17:53:41.423804   22516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 17:53:41.434425   22516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 17:53:41.445320   22516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:53:41.455176   22516 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 17:53:41.455212   22516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 17:53:41.516228   22516 kubeadm.go:322] W0229 17:53:41.501196     836 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 17:53:41.648416   22516 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:53:44.397997   22516 kubeadm.go:322] W0229 17:53:44.384900     836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:53:44.399237   22516 kubeadm.go:322] W0229 17:53:44.386136     836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:55:39.399521   22516 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 17:55:39.399622   22516 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 17:55:39.400981   22516 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 17:55:39.401076   22516 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:55:39.401151   22516 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:55:39.401243   22516 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:55:39.401361   22516 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:55:39.401485   22516 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:55:39.401582   22516 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:55:39.401626   22516 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:55:39.401688   22516 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:55:39.403371   22516 out.go:204]   - Generating certificates and keys ...
	I0229 17:55:39.403444   22516 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:55:39.403506   22516 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:55:39.403573   22516 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 17:55:39.403658   22516 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 17:55:39.403745   22516 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 17:55:39.403835   22516 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 17:55:39.403915   22516 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 17:55:39.404061   22516 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
	I0229 17:55:39.404110   22516 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 17:55:39.404222   22516 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
	I0229 17:55:39.404296   22516 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 17:55:39.404379   22516 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 17:55:39.404420   22516 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 17:55:39.404468   22516 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:55:39.404515   22516 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:55:39.404563   22516 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:55:39.404617   22516 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:55:39.404664   22516 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:55:39.404727   22516 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:55:39.406038   22516 out.go:204]   - Booting up control plane ...
	I0229 17:55:39.406129   22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:55:39.406214   22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:55:39.406275   22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:55:39.406344   22516 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:55:39.406474   22516 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:55:39.406520   22516 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 17:55:39.406596   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:39.406769   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:39.406874   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:39.407072   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:39.407144   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:39.407324   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:39.407405   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:39.407594   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:39.407655   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:55:39.407824   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:55:39.407844   22516 kubeadm.go:322] 
	I0229 17:55:39.407905   22516 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 17:55:39.407955   22516 kubeadm.go:322] 		timed out waiting for the condition
	I0229 17:55:39.407963   22516 kubeadm.go:322] 
	I0229 17:55:39.407991   22516 kubeadm.go:322] 	This error is likely caused by:
	I0229 17:55:39.408021   22516 kubeadm.go:322] 		- The kubelet is not running
	I0229 17:55:39.408110   22516 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 17:55:39.408117   22516 kubeadm.go:322] 
	I0229 17:55:39.408207   22516 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 17:55:39.408242   22516 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 17:55:39.408271   22516 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 17:55:39.408277   22516 kubeadm.go:322] 
	I0229 17:55:39.408397   22516 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 17:55:39.408506   22516 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 17:55:39.408525   22516 kubeadm.go:322] 
	I0229 17:55:39.408651   22516 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 17:55:39.408779   22516 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0229 17:55:39.408888   22516 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 17:55:39.408985   22516 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0229 17:55:39.409027   22516 kubeadm.go:322] 
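The troubleshooting steps kubeadm prints above can be run directly in the guest; a sketch, assuming access via minikube ssh -p ingress-addon-legacy-180742 and substituting a real ID for CONTAINERID:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID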
	W0229 17:55:39.409102   22516 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:53:41.501196     836 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:53:44.384900     836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:53:44.386136     836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
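Each [kubelet-check] retry above is the same probe of the kubelet's health endpoint on port 10248 (a healthy kubelet answers ok), and the stderr warning points at the kubelet service not being enabled. Both can be exercised by hand from the guest, for example:

	curl -sSL http://localhost:10248/healthz
	sudo systemctl enable kubelet.service   # the command the Service-Kubelet warning suggests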
	
	I0229 17:55:39.409144   22516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 17:55:39.898425   22516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:55:39.913950   22516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:55:39.924440   22516 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 17:55:39.924480   22516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 17:55:39.986793   22516 kubeadm.go:322] W0229 17:55:39.980404    3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 17:55:40.124210   22516 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:55:41.051999   22516 kubeadm.go:322] W0229 17:55:41.045820    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:55:41.053452   22516 kubeadm.go:322] W0229 17:55:41.047318    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:57:36.062335   22516 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 17:57:36.062481   22516 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 17:57:36.063939   22516 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 17:57:36.064012   22516 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:57:36.064124   22516 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:57:36.064262   22516 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:57:36.064399   22516 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:57:36.064530   22516 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:57:36.064639   22516 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:57:36.064705   22516 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:57:36.064799   22516 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:57:36.066659   22516 out.go:204]   - Generating certificates and keys ...
	I0229 17:57:36.066741   22516 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:57:36.066830   22516 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:57:36.066922   22516 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 17:57:36.066979   22516 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 17:57:36.067044   22516 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 17:57:36.067089   22516 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 17:57:36.067148   22516 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 17:57:36.067238   22516 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 17:57:36.067346   22516 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 17:57:36.067443   22516 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 17:57:36.067499   22516 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 17:57:36.067579   22516 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:57:36.067651   22516 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:57:36.067714   22516 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:57:36.067768   22516 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:57:36.067814   22516 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:57:36.067868   22516 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:57:36.069742   22516 out.go:204]   - Booting up control plane ...
	I0229 17:57:36.069810   22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:57:36.069873   22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:57:36.069944   22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:57:36.070033   22516 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:57:36.070169   22516 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:57:36.070217   22516 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 17:57:36.070288   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:57:36.070486   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:57:36.070621   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:57:36.070808   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:57:36.070874   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:57:36.071027   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:57:36.071086   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:57:36.071238   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:57:36.071301   22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:57:36.071460   22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:57:36.071474   22516 kubeadm.go:322] 
	I0229 17:57:36.071534   22516 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 17:57:36.071592   22516 kubeadm.go:322] 		timed out waiting for the condition
	I0229 17:57:36.071602   22516 kubeadm.go:322] 
	I0229 17:57:36.071656   22516 kubeadm.go:322] 	This error is likely caused by:
	I0229 17:57:36.071712   22516 kubeadm.go:322] 		- The kubelet is not running
	I0229 17:57:36.071820   22516 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 17:57:36.071828   22516 kubeadm.go:322] 
	I0229 17:57:36.071929   22516 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 17:57:36.071978   22516 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 17:57:36.072026   22516 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 17:57:36.072035   22516 kubeadm.go:322] 
	I0229 17:57:36.072146   22516 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 17:57:36.072241   22516 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 17:57:36.072249   22516 kubeadm.go:322] 
	I0229 17:57:36.072340   22516 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 17:57:36.072426   22516 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0229 17:57:36.072489   22516 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 17:57:36.072555   22516 kubeadm.go:322] 		- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	I0229 17:57:36.072601   22516 kubeadm.go:322] 
	I0229 17:57:36.072608   22516 kubeadm.go:406] StartCluster complete in 3m54.688031218s
	I0229 17:57:36.072639   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 17:57:36.072695   22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 17:57:36.119684   22516 cri.go:89] found id: ""
	I0229 17:57:36.119708   22516 logs.go:276] 0 containers: []
	W0229 17:57:36.119717   22516 logs.go:278] No container was found matching "kube-apiserver"
	I0229 17:57:36.119724   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 17:57:36.119783   22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 17:57:36.165725   22516 cri.go:89] found id: ""
	I0229 17:57:36.165748   22516 logs.go:276] 0 containers: []
	W0229 17:57:36.165758   22516 logs.go:278] No container was found matching "etcd"
	I0229 17:57:36.165766   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 17:57:36.165821   22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 17:57:36.216132   22516 cri.go:89] found id: ""
	I0229 17:57:36.216161   22516 logs.go:276] 0 containers: []
	W0229 17:57:36.216172   22516 logs.go:278] No container was found matching "coredns"
	I0229 17:57:36.216179   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 17:57:36.216240   22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 17:57:36.266689   22516 cri.go:89] found id: ""
	I0229 17:57:36.266717   22516 logs.go:276] 0 containers: []
	W0229 17:57:36.266727   22516 logs.go:278] No container was found matching "kube-scheduler"
	I0229 17:57:36.266734   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 17:57:36.266800   22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 17:57:36.308867   22516 cri.go:89] found id: ""
	I0229 17:57:36.308891   22516 logs.go:276] 0 containers: []
	W0229 17:57:36.308898   22516 logs.go:278] No container was found matching "kube-proxy"
	I0229 17:57:36.308903   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 17:57:36.308948   22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 17:57:36.346038   22516 cri.go:89] found id: ""
	I0229 17:57:36.346064   22516 logs.go:276] 0 containers: []
	W0229 17:57:36.346073   22516 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 17:57:36.346080   22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 17:57:36.346149   22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 17:57:36.383537   22516 cri.go:89] found id: ""
	I0229 17:57:36.383564   22516 logs.go:276] 0 containers: []
	W0229 17:57:36.383571   22516 logs.go:278] No container was found matching "kindnet"
	I0229 17:57:36.383580   22516 logs.go:123] Gathering logs for kubelet ...
	I0229 17:57:36.383592   22516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 17:57:36.411142   22516 logs.go:138] Found kubelet problem: Feb 29 17:57:27 ingress-addon-legacy-180742 kubelet[6000]: F0229 17:57:27.972664    6000 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:57:36.416761   22516 logs.go:138] Found kubelet problem: Feb 29 17:57:29 ingress-addon-legacy-180742 kubelet[6024]: F0229 17:57:29.256416    6024 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:57:36.421811   22516 logs.go:138] Found kubelet problem: Feb 29 17:57:30 ingress-addon-legacy-180742 kubelet[6050]: F0229 17:57:30.511750    6050 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:57:36.426624   22516 logs.go:138] Found kubelet problem: Feb 29 17:57:31 ingress-addon-legacy-180742 kubelet[6076]: F0229 17:57:31.748210    6076 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:57:36.431407   22516 logs.go:138] Found kubelet problem: Feb 29 17:57:33 ingress-addon-legacy-180742 kubelet[6099]: F0229 17:57:33.005862    6099 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:57:36.436197   22516 logs.go:138] Found kubelet problem: Feb 29 17:57:34 ingress-addon-legacy-180742 kubelet[6124]: F0229 17:57:34.247151    6124 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:57:36.441013   22516 logs.go:138] Found kubelet problem: Feb 29 17:57:35 ingress-addon-legacy-180742 kubelet[6148]: F0229 17:57:35.485727    6148 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
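All seven kubelet problems flagged above are the same fatal error (Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache), which suggests the kubelet keeps restarting and dying before it can ever serve /healthz on 10248. Those lines can be pulled straight from the journal that minikube reads here, for example:

	sudo journalctl -u kubelet -n 400 | grep -F 'Failed to start ContainerManager'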
	I0229 17:57:36.443629   22516 logs.go:123] Gathering logs for dmesg ...
	I0229 17:57:36.443644   22516 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 17:57:36.458261   22516 logs.go:123] Gathering logs for describe nodes ...
	I0229 17:57:36.458282   22516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 17:57:36.526123   22516 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
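The refused connection to localhost:8443 is consistent with the crictl queries earlier in the log gathering above, which found no kube-apiserver (or any other control-plane) container, so nothing is listening on the API port. The same check by hand:

	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output in this run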
	I0229 17:57:36.526143   22516 logs.go:123] Gathering logs for containerd ...
	I0229 17:57:36.526160   22516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 17:57:36.567189   22516 logs.go:123] Gathering logs for container status ...
	I0229 17:57:36.567225   22516 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 17:57:36.638180   22516 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:55:39.980404    3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:41.045820    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:41.047318    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 17:57:36.638233   22516 out.go:239] * 
	W0229 17:57:36.638317   22516 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:55:39.980404    3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:41.045820    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:41.047318    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:55:39.980404    3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:41.045820    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:41.047318    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 17:57:36.638342   22516 out.go:239] * 
	* 
	W0229 17:57:36.639322   22516 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 17:57:36.641894   22516 out.go:177] X Problems detected in kubelet:
	I0229 17:57:36.643825   22516 out.go:177]   Feb 29 17:57:27 ingress-addon-legacy-180742 kubelet[6000]: F0229 17:57:27.972664    6000 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 17:57:36.645243   22516 out.go:177]   Feb 29 17:57:29 ingress-addon-legacy-180742 kubelet[6024]: F0229 17:57:29.256416    6024 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 17:57:36.646647   22516 out.go:177]   Feb 29 17:57:30 ingress-addon-legacy-180742 kubelet[6050]: F0229 17:57:30.511750    6050 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 17:57:36.649232   22516 out.go:177] 
	W0229 17:57:36.650532   22516 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
	W0229 17:55:39.980404    3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:41.045820    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:41.047318    3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 17:57:36.650604   22516 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 17:57:36.650640   22516 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 17:57:36.652209   22516 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (297.13s)
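
The kubelet journal lines above point at the root cause ("Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache"), and both kubeadm and minikube print their own triage hints. The following is a minimal manual-triage sketch assembled only from those hints; the profile name and flags mirror the failing invocation, and the cgroup-driver override is just the suggestion minikube itself prints, not a confirmed fix:

	# Inside the failing node (e.g. via `minikube ssh -p ingress-addon-legacy-180742`):
	# 1. Is the kubelet up, and why did it exit?
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50    # expect the "Failed to start ContainerManager ... rootfs info" lines seen above
	# 2. Did any control-plane container start under containerd?
	sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause
	# 3. Retry the start with the cgroup-driver override minikube suggests (workaround, not guaranteed):
	out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
	  --driver=kvm2 --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd
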

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (112.22s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-180742 addons enable ingress --alsologtostderr -v=5
E0229 17:57:55.672763   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:59:17.593635   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-180742 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m51.977975631s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:57:36.772838   23439 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:57:36.773111   23439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:57:36.773120   23439 out.go:304] Setting ErrFile to fd 2...
	I0229 17:57:36.773124   23439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:57:36.773301   23439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 17:57:36.773599   23439 mustload.go:65] Loading cluster: ingress-addon-legacy-180742
	I0229 17:57:36.773896   23439 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 17:57:36.773913   23439 addons.go:597] checking whether the cluster is paused
	I0229 17:57:36.773989   23439 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 17:57:36.774000   23439 host.go:66] Checking if "ingress-addon-legacy-180742" exists ...
	I0229 17:57:36.774391   23439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:57:36.774427   23439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:57:36.788913   23439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0229 17:57:36.789405   23439 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:57:36.789904   23439 main.go:141] libmachine: Using API Version  1
	I0229 17:57:36.789926   23439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:57:36.790244   23439 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:57:36.790401   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetState
	I0229 17:57:36.792055   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:57:36.792252   23439 ssh_runner.go:195] Run: systemctl --version
	I0229 17:57:36.792272   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:57:36.794401   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:57:36.794814   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:57:36.794847   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:57:36.794969   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:57:36.795107   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:57:36.795257   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:57:36.795372   23439 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:57:36.877382   23439 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 17:57:36.877469   23439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 17:57:36.916751   23439 cri.go:89] found id: ""
	I0229 17:57:36.916800   23439 main.go:141] libmachine: Making call to close driver server
	I0229 17:57:36.916812   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Close
	I0229 17:57:36.917085   23439 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:57:36.917097   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Closing plugin on server side
	I0229 17:57:36.917102   23439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:57:36.919526   23439 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 17:57:36.920926   23439 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 17:57:36.920940   23439 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-180742"
	I0229 17:57:36.920946   23439 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-180742"
	I0229 17:57:36.920974   23439 host.go:66] Checking if "ingress-addon-legacy-180742" exists ...
	I0229 17:57:36.921203   23439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:57:36.921241   23439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:57:36.935253   23439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I0229 17:57:36.935610   23439 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:57:36.936055   23439 main.go:141] libmachine: Using API Version  1
	I0229 17:57:36.936075   23439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:57:36.936426   23439 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:57:36.936868   23439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:57:36.936925   23439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:57:36.950209   23439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0229 17:57:36.950534   23439 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:57:36.950979   23439 main.go:141] libmachine: Using API Version  1
	I0229 17:57:36.951003   23439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:57:36.951284   23439 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:57:36.951441   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetState
	I0229 17:57:36.952920   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:57:36.954702   23439 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 17:57:36.956116   23439 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 17:57:36.957375   23439 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 17:57:36.958752   23439 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 17:57:36.958767   23439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 17:57:36.958786   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:57:36.961176   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:57:36.961502   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:57:36.961530   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:57:36.961662   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:57:36.961829   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:57:36.961961   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:57:36.962092   23439 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:57:37.053697   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:37.118594   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:37.118629   23439 retry.go:31] will retry after 246.4439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:37.366129   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:37.465199   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:37.465243   23439 retry.go:31] will retry after 433.243586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:37.898851   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:37.964343   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:37.964380   23439 retry.go:31] will retry after 355.035974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:38.320031   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:38.384790   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:38.384825   23439 retry.go:31] will retry after 590.384964ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:38.975623   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:39.046475   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:39.046504   23439 retry.go:31] will retry after 1.786723793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:40.834490   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:40.904598   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:40.904626   23439 retry.go:31] will retry after 1.964510111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:42.869912   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:42.934835   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:42.934875   23439 retry.go:31] will retry after 2.786222596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:45.723828   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:45.787402   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:45.787437   23439 retry.go:31] will retry after 4.63798823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:50.426697   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:50.492803   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:50.492832   23439 retry.go:31] will retry after 7.974151223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:58.467962   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:57:58.557648   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:58.557679   23439 retry.go:31] will retry after 12.680737602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:11.238672   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:11.337276   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:11.337315   23439 retry.go:31] will retry after 17.370436354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:28.708740   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:28.776300   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:28.776332   23439 retry.go:31] will retry after 13.98223158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:42.758754   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:58:42.827530   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:58:42.827564   23439 retry.go:31] will retry after 45.778448989s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:28.607626   23439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:59:28.675734   23439 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:28.675798   23439 main.go:141] libmachine: Making call to close driver server
	I0229 17:59:28.675812   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Close
	I0229 17:59:28.676074   23439 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:59:28.676088   23439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:59:28.676073   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Closing plugin on server side
	I0229 17:59:28.676097   23439 main.go:141] libmachine: Making call to close driver server
	I0229 17:59:28.676107   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Close
	I0229 17:59:28.676355   23439 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:59:28.676372   23439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:59:28.676367   23439 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Closing plugin on server side
	I0229 17:59:28.676388   23439 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-180742"
	I0229 17:59:28.678351   23439 out.go:177] * Verifying ingress addon...
	I0229 17:59:28.680644   23439 out.go:177] 
	W0229 17:59:28.682132   23439 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-180742" does not exist: client config: context "ingress-addon-legacy-180742" does not exist]
	W0229 17:59:28.682147   23439 out.go:239] * 
	* 
	W0229 17:59:28.684266   23439 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 17:59:28.686029   23439 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-180742 -n ingress-addon-legacy-180742
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-180742 -n ingress-addon-legacy-180742: exit status 6 (240.233837ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 17:59:28.915305   23729 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-180742" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-180742" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (112.22s)
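
Every kubectl apply retry above is refused at localhost:8443, and the post-mortem shows the profile's context is missing from the kubeconfig, so the addon can never be verified. A hedged pre-flight sketch before retrying the enable, built from the commands already quoted in this report; it assumes the cluster from the previous test actually reaches Running, which it did not here:

	# Confirm the apiserver is reachable and the kubeconfig has the profile's context:
	out/minikube-linux-amd64 status -p ingress-addon-legacy-180742
	out/minikube-linux-amd64 update-context -p ingress-addon-legacy-180742   # clears the "stale minikube-vm" kubectl context warning
	kubectl --context ingress-addon-legacy-180742 get nodes                  # should not report "connection ... refused"
	# Only then retry the addon:
	out/minikube-linux-amd64 -p ingress-addon-legacy-180742 addons enable ingress --alsologtostderr -v=5
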

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (91.66s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-180742 addons enable ingress-dns --alsologtostderr -v=5
E0229 17:59:42.039914   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-180742 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m31.434491258s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:59:28.981098   23762 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:59:28.981262   23762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:59:28.981272   23762 out.go:304] Setting ErrFile to fd 2...
	I0229 17:59:28.981277   23762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:59:28.981457   23762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 17:59:28.981696   23762 mustload.go:65] Loading cluster: ingress-addon-legacy-180742
	I0229 17:59:28.982017   23762 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 17:59:28.982033   23762 addons.go:597] checking whether the cluster is paused
	I0229 17:59:28.982107   23762 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 17:59:28.982118   23762 host.go:66] Checking if "ingress-addon-legacy-180742" exists ...
	I0229 17:59:28.982463   23762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:59:28.982510   23762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:59:28.996546   23762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0229 17:59:28.996961   23762 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:59:28.997508   23762 main.go:141] libmachine: Using API Version  1
	I0229 17:59:28.997534   23762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:59:28.997860   23762 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:59:28.998034   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetState
	I0229 17:59:28.999406   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:59:28.999616   23762 ssh_runner.go:195] Run: systemctl --version
	I0229 17:59:28.999640   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:59:29.001707   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:59:29.002036   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:59:29.002060   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:59:29.002161   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:59:29.002324   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:59:29.002457   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:59:29.002597   23762 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:59:29.081073   23762 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 17:59:29.081153   23762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 17:59:29.122611   23762 cri.go:89] found id: ""
	I0229 17:59:29.122698   23762 main.go:141] libmachine: Making call to close driver server
	I0229 17:59:29.122725   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Close
	I0229 17:59:29.122981   23762 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:59:29.123037   23762 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:59:29.123006   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Closing plugin on server side
	I0229 17:59:29.125425   23762 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 17:59:29.126848   23762 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0229 17:59:29.126862   23762 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-180742"
	I0229 17:59:29.126871   23762 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-180742"
	I0229 17:59:29.126912   23762 host.go:66] Checking if "ingress-addon-legacy-180742" exists ...
	I0229 17:59:29.127215   23762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:59:29.127251   23762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:59:29.141168   23762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0229 17:59:29.141509   23762 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:59:29.141910   23762 main.go:141] libmachine: Using API Version  1
	I0229 17:59:29.141929   23762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:59:29.142236   23762 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:59:29.142680   23762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:59:29.142712   23762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:59:29.156110   23762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0229 17:59:29.156442   23762 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:59:29.156797   23762 main.go:141] libmachine: Using API Version  1
	I0229 17:59:29.156818   23762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:59:29.157125   23762 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:59:29.157296   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetState
	I0229 17:59:29.158770   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
	I0229 17:59:29.160298   23762 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 17:59:29.161407   23762 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 17:59:29.161419   23762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 17:59:29.161433   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
	I0229 17:59:29.163950   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:59:29.164329   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
	I0229 17:59:29.164360   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
	I0229 17:59:29.164461   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
	I0229 17:59:29.164616   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
	I0229 17:59:29.164760   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
	I0229 17:59:29.164880   23762 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
	I0229 17:59:29.257782   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:29.323500   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:29.323534   23762 retry.go:31] will retry after 276.88135ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:29.601060   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:29.697171   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:29.697200   23762 retry.go:31] will retry after 211.380022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:29.909667   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:29.997305   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:29.997338   23762 retry.go:31] will retry after 728.553636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:30.726265   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:30.832369   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:30.832413   23762 retry.go:31] will retry after 803.304345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:31.636446   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:31.701254   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:31.701288   23762 retry.go:31] will retry after 1.752624931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:33.455270   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:33.547815   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:33.547848   23762 retry.go:31] will retry after 1.957271877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:35.505901   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:35.617362   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:35.617398   23762 retry.go:31] will retry after 2.852769184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:38.472439   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:38.539086   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:38.539120   23762 retry.go:31] will retry after 4.419318903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:42.962028   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:43.025663   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:43.025688   23762 retry.go:31] will retry after 7.997367901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:51.024911   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:59:51.115141   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:59:51.115180   23762 retry.go:31] will retry after 10.863327311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:01.981308   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:02.049132   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:02.049164   23762 retry.go:31] will retry after 12.84868826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:14.898023   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:14.962729   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:14.962759   23762 retry.go:31] will retry after 24.133011673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:39.096283   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:00:39.186668   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:00:39.186717   23762 retry.go:31] will retry after 21.101642638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:01:00.288792   23762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:01:00.354107   23762 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:01:00.354170   23762 main.go:141] libmachine: Making call to close driver server
	I0229 18:01:00.354181   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Close
	I0229 18:01:00.354488   23762 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:01:00.354507   23762 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:01:00.354515   23762 main.go:141] libmachine: Making call to close driver server
	I0229 18:01:00.354536   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Close
	I0229 18:01:00.354540   23762 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Closing plugin on server side
	I0229 18:01:00.354755   23762 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:01:00.354772   23762 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:01:00.357327   23762 out.go:177] 
	W0229 18:01:00.358748   23762 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0229 18:01:00.358766   23762 out.go:239] * 
	* 
	W0229 18:01:00.360744   23762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:01:00.362120   23762 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
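Note: the stderr above shows minikube's addon applier retrying the same kubectl apply with roughly doubling, jittered delays (276ms, 211ms, 728ms, ... up to ~24s) while the apiserver on localhost:8443 never comes up. A minimal sketch of that retry-with-backoff pattern is below; the function names and bounds are illustrative only and are not minikube's actual retry code.

	// Sketch of retry with capped, jittered exponential backoff, as suggested by
	// the "will retry after ..." lines above. Illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(apply func() error, maxElapsed time.Duration) error {
		start := time.Now()
		delay := 200 * time.Millisecond // roughly where the delays in the log start
		for {
			err := apply()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Second), err)
			}
			// Jitter keeps parallel callers from retrying in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			if delay < 20*time.Second { // the delays in the log plateau around 20-24s
				delay *= 2
			}
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("connection to the server localhost:8443 was refused")
			}
			return nil
		}, 90*time.Second)
		fmt.Println("result:", err)
	}

The retries themselves are not the problem here: the apply can only succeed once the apiserver is reachable, which it never is during the roughly 90 seconds of retries shown above.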
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-180742 -n ingress-addon-legacy-180742
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-180742 -n ingress-addon-legacy-180742: exit status 6 (227.00995ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0229 18:01:00.578537   24001 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-180742" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-180742" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (91.66s)
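Note: the status check above fails because the profile's entry is missing from the kubeconfig ("does not appear in .../kubeconfig"), so minikube cannot extract an apiserver endpoint; the fix suggested in the stdout is `minikube update-context`. The sketch below, assuming client-go's clientcmd package, shows the kind of lookup that error corresponds to; it is illustrative and not minikube's own status code.

	// Sketch: check whether a named cluster appears in a kubeconfig file,
	// roughly what the "does not appear in ... kubeconfig" error reports.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/18259-6412/kubeconfig"
		profile := "ingress-addon-legacy-180742"

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			fmt.Printf("%q does not appear in %s\n", profile, path)
			return
		}
		fmt.Println("apiserver endpoint:", cluster.Server)
	}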

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-180742 -n ingress-addon-legacy-180742
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-180742 -n ingress-addon-legacy-180742: exit status 6 (224.680812ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0229 18:01:00.803439   24031 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-180742" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-180742" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)
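Note: this subtest fails immediately with "failed to get Kubernetes client: <nil>" because no usable client can be built from the stale kubeconfig left behind by the previous failure. The sketch below shows one common way to build a clientset from a kubeconfig and context with client-go; it is illustrative and is not the helper that addons_test.go actually uses.

	// Sketch: build a Kubernetes clientset for a specific kubeconfig context.
	// This is the step that fails above; the code here is illustrative only.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/18259-6412/kubeconfig"
		contextName := "ingress-addon-legacy-180742"

		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig},
			&clientcmd.ConfigOverrides{CurrentContext: contextName},
		).ClientConfig()
		if err != nil {
			log.Fatalf("failed to get Kubernetes client config: %v", err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("failed to get Kubernetes client: %v", err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatalf("list pods: %v", err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))
	}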

TestKubernetesUpgrade (361.45s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109 (4m53.938698224s)

-- stdout --
	* [kubernetes-upgrade-907979] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node kubernetes-upgrade-907979 in cluster kubernetes-upgrade-907979
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0229 18:30:53.539370   38858 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:30:53.539472   38858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:30:53.539486   38858 out.go:304] Setting ErrFile to fd 2...
	I0229 18:30:53.539493   38858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:30:53.539755   38858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:30:53.540262   38858 out.go:298] Setting JSON to false
	I0229 18:30:53.541223   38858 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4395,"bootTime":1709227059,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:30:53.541286   38858 start.go:139] virtualization: kvm guest
	I0229 18:30:53.543812   38858 out.go:177] * [kubernetes-upgrade-907979] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:30:53.545338   38858 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:30:53.545360   38858 notify.go:220] Checking for updates...
	I0229 18:30:53.546900   38858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:30:53.548298   38858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:30:53.549599   38858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:30:53.550748   38858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:30:53.551838   38858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:30:53.553328   38858 config.go:182] Loaded profile config "NoKubernetes-388162": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0229 18:30:53.553410   38858 config.go:182] Loaded profile config "cert-expiration-829233": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:30:53.553487   38858 config.go:182] Loaded profile config "cert-options-153536": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:30:53.553573   38858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:30:53.590090   38858 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:30:53.591360   38858 start.go:299] selected driver: kvm2
	I0229 18:30:53.591385   38858 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:30:53.591396   38858 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:30:53.592287   38858 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:30:53.592373   38858 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:30:53.607428   38858 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:30:53.607478   38858 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:30:53.607720   38858 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 18:30:53.607818   38858 cni.go:84] Creating CNI manager for ""
	I0229 18:30:53.607837   38858 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:30:53.607848   38858 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:30:53.607857   38858 start_flags.go:323] config:
	{Name:kubernetes-upgrade-907979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:30:53.608067   38858 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:30:53.610688   38858 out.go:177] * Starting control plane node kubernetes-upgrade-907979 in cluster kubernetes-upgrade-907979
	I0229 18:30:53.612002   38858 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 18:30:53.612047   38858 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 18:30:53.612055   38858 cache.go:56] Caching tarball of preloaded images
	I0229 18:30:53.612161   38858 preload.go:174] Found /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:30:53.612175   38858 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 18:30:53.612304   38858 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/config.json ...
	I0229 18:30:53.612328   38858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/config.json: {Name:mkbb8e491907395ae8c284ea0f5047c273d08aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:30:53.612498   38858 start.go:365] acquiring machines lock for kubernetes-upgrade-907979: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:31:14.995568   38858 start.go:369] acquired machines lock for "kubernetes-upgrade-907979" in 21.383009462s
	I0229 18:31:14.995648   38858 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-907979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:31:14.995772   38858 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:31:14.998479   38858 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:31:14.998747   38858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:31:14.998796   38858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:31:15.018367   38858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39171
	I0229 18:31:15.018790   38858 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:31:15.019322   38858 main.go:141] libmachine: Using API Version  1
	I0229 18:31:15.019343   38858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:31:15.019740   38858 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:31:15.019918   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetMachineName
	I0229 18:31:15.020081   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:15.020227   38858 start.go:159] libmachine.API.Create for "kubernetes-upgrade-907979" (driver="kvm2")
	I0229 18:31:15.020278   38858 client.go:168] LocalClient.Create starting
	I0229 18:31:15.020311   38858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem
	I0229 18:31:15.020394   38858 main.go:141] libmachine: Decoding PEM data...
	I0229 18:31:15.020416   38858 main.go:141] libmachine: Parsing certificate...
	I0229 18:31:15.020489   38858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem
	I0229 18:31:15.020517   38858 main.go:141] libmachine: Decoding PEM data...
	I0229 18:31:15.020534   38858 main.go:141] libmachine: Parsing certificate...
	I0229 18:31:15.020559   38858 main.go:141] libmachine: Running pre-create checks...
	I0229 18:31:15.020577   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .PreCreateCheck
	I0229 18:31:15.021487   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetConfigRaw
	I0229 18:31:15.023829   38858 main.go:141] libmachine: Creating machine...
	I0229 18:31:15.023849   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .Create
	I0229 18:31:15.023991   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Creating KVM machine...
	I0229 18:31:15.025857   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found existing default KVM network
	I0229 18:31:15.027257   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:15.027115   41116 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:16:be} reservation:<nil>}
	I0229 18:31:15.028412   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:15.028318   41116 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027e5b0}
	I0229 18:31:15.034904   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | trying to create private KVM network mk-kubernetes-upgrade-907979 192.168.50.0/24...
	I0229 18:31:15.105602   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | private KVM network mk-kubernetes-upgrade-907979 192.168.50.0/24 created
	I0229 18:31:15.105638   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:15.105563   41116 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:31:15.105653   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Setting up store path in /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979 ...
	I0229 18:31:15.105680   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Building disk image from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:31:15.105732   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Downloading /home/jenkins/minikube-integration/18259-6412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:31:15.339562   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:15.339447   41116 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa...
	I0229 18:31:15.589282   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:15.589175   41116 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/kubernetes-upgrade-907979.rawdisk...
	I0229 18:31:15.589319   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Writing magic tar header
	I0229 18:31:15.589422   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Writing SSH key tar header
	I0229 18:31:15.589464   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:15.589287   41116 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979 ...
	I0229 18:31:15.589483   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979 (perms=drwx------)
	I0229 18:31:15.589507   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:31:15.589518   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube (perms=drwxr-xr-x)
	I0229 18:31:15.589532   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412 (perms=drwxrwxr-x)
	I0229 18:31:15.589545   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979
	I0229 18:31:15.589555   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines
	I0229 18:31:15.589569   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:31:15.589587   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:31:15.589600   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Creating domain...
	I0229 18:31:15.589619   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:31:15.589639   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412
	I0229 18:31:15.589653   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:31:15.589665   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:31:15.589677   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Checking permissions on dir: /home
	I0229 18:31:15.589688   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Skipping /home - not owner
	I0229 18:31:15.590804   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) define libvirt domain using xml: 
	I0229 18:31:15.590833   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) <domain type='kvm'>
	I0229 18:31:15.590846   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   <name>kubernetes-upgrade-907979</name>
	I0229 18:31:15.590854   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   <memory unit='MiB'>2200</memory>
	I0229 18:31:15.590863   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   <vcpu>2</vcpu>
	I0229 18:31:15.590870   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   <features>
	I0229 18:31:15.590878   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <acpi/>
	I0229 18:31:15.590885   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <apic/>
	I0229 18:31:15.590893   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <pae/>
	I0229 18:31:15.590901   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     
	I0229 18:31:15.590909   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   </features>
	I0229 18:31:15.590917   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   <cpu mode='host-passthrough'>
	I0229 18:31:15.590942   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   
	I0229 18:31:15.590957   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   </cpu>
	I0229 18:31:15.590966   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   <os>
	I0229 18:31:15.590974   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <type>hvm</type>
	I0229 18:31:15.590984   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <boot dev='cdrom'/>
	I0229 18:31:15.590991   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <boot dev='hd'/>
	I0229 18:31:15.591006   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <bootmenu enable='no'/>
	I0229 18:31:15.591017   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   </os>
	I0229 18:31:15.591031   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   <devices>
	I0229 18:31:15.591047   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <disk type='file' device='cdrom'>
	I0229 18:31:15.591071   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/boot2docker.iso'/>
	I0229 18:31:15.591084   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <target dev='hdc' bus='scsi'/>
	I0229 18:31:15.591097   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <readonly/>
	I0229 18:31:15.591108   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     </disk>
	I0229 18:31:15.591122   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <disk type='file' device='disk'>
	I0229 18:31:15.591154   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:31:15.591177   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/kubernetes-upgrade-907979.rawdisk'/>
	I0229 18:31:15.591188   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <target dev='hda' bus='virtio'/>
	I0229 18:31:15.591200   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     </disk>
	I0229 18:31:15.591212   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <interface type='network'>
	I0229 18:31:15.591225   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <source network='mk-kubernetes-upgrade-907979'/>
	I0229 18:31:15.591236   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <model type='virtio'/>
	I0229 18:31:15.591246   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     </interface>
	I0229 18:31:15.591257   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <interface type='network'>
	I0229 18:31:15.591269   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <source network='default'/>
	I0229 18:31:15.591283   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <model type='virtio'/>
	I0229 18:31:15.591292   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     </interface>
	I0229 18:31:15.591301   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <serial type='pty'>
	I0229 18:31:15.591310   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <target port='0'/>
	I0229 18:31:15.591321   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     </serial>
	I0229 18:31:15.591333   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <console type='pty'>
	I0229 18:31:15.591343   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <target type='serial' port='0'/>
	I0229 18:31:15.591352   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     </console>
	I0229 18:31:15.591361   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     <rng model='virtio'>
	I0229 18:31:15.591372   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)       <backend model='random'>/dev/random</backend>
	I0229 18:31:15.591379   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     </rng>
	I0229 18:31:15.591399   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     
	I0229 18:31:15.591406   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)     
	I0229 18:31:15.591426   38858 main.go:141] libmachine: (kubernetes-upgrade-907979)   </devices>
	I0229 18:31:15.591435   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) </domain>
	I0229 18:31:15.591449   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) 
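Note: the block above is the libvirt domain XML the kvm2 driver generates before defining and booting the VM. The sketch below, assuming the libvirt.org/go/libvirt bindings and a hypothetical domain.xml file holding that XML, shows the define-and-create step that follows; it is illustrative and not the driver's actual code.

	// Sketch: define a libvirt domain from XML and start it, i.e. the
	// "Creating domain..." step in the log. Illustrative only.
	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("domain.xml") // hypothetical file with the XML shown above
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot the VM
			log.Fatal(err)
		}
		log.Println("domain started; the driver then waits for a DHCP lease to learn its IP")
	}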
	I0229 18:31:15.596139   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:5b:c9:6a in network default
	I0229 18:31:15.596882   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Ensuring networks are active...
	I0229 18:31:15.596899   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:15.597743   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Ensuring network default is active
	I0229 18:31:15.598057   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Ensuring network mk-kubernetes-upgrade-907979 is active
	I0229 18:31:15.598639   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Getting domain xml...
	I0229 18:31:15.599391   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Creating domain...
	I0229 18:31:16.912968   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Waiting to get IP...
	I0229 18:31:16.913690   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:16.914081   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:16.914159   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:16.914070   41116 retry.go:31] will retry after 226.1727ms: waiting for machine to come up
	I0229 18:31:17.141528   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:17.142116   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:17.142143   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:17.142047   41116 retry.go:31] will retry after 381.718334ms: waiting for machine to come up
	I0229 18:31:17.525646   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:17.526160   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:17.526187   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:17.526100   41116 retry.go:31] will retry after 443.851427ms: waiting for machine to come up
	I0229 18:31:17.971933   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:17.972536   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:17.972585   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:17.972483   41116 retry.go:31] will retry after 461.624676ms: waiting for machine to come up
	I0229 18:31:18.436164   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:18.436713   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:18.436739   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:18.436658   41116 retry.go:31] will retry after 718.371231ms: waiting for machine to come up
	I0229 18:31:19.156523   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:19.157104   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:19.157136   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:19.157048   41116 retry.go:31] will retry after 660.733458ms: waiting for machine to come up
	I0229 18:31:19.819737   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:19.820308   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:19.820332   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:19.820253   41116 retry.go:31] will retry after 1.012162675s: waiting for machine to come up
	I0229 18:31:20.834326   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:20.834910   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:20.834934   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:20.834859   41116 retry.go:31] will retry after 1.460024823s: waiting for machine to come up
	I0229 18:31:22.296868   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:22.297493   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:22.297524   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:22.297449   41116 retry.go:31] will retry after 1.354138655s: waiting for machine to come up
	I0229 18:31:23.653294   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:23.653858   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:23.653885   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:23.653817   41116 retry.go:31] will retry after 2.131626152s: waiting for machine to come up
	I0229 18:31:25.787050   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:25.787553   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:25.787594   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:25.787503   41116 retry.go:31] will retry after 2.296070285s: waiting for machine to come up
	I0229 18:31:28.085040   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:28.085596   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:28.085624   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:28.085556   41116 retry.go:31] will retry after 3.59834663s: waiting for machine to come up
	I0229 18:31:31.685229   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:31.685667   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:31.685698   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:31.685611   41116 retry.go:31] will retry after 2.987970607s: waiting for machine to come up
	I0229 18:31:34.675846   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:34.676410   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find current IP address of domain kubernetes-upgrade-907979 in network mk-kubernetes-upgrade-907979
	I0229 18:31:34.676460   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | I0229 18:31:34.676373   41116 retry.go:31] will retry after 4.733055177s: waiting for machine to come up
	I0229 18:31:39.414110   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.414790   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has current primary IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.414816   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Found IP for machine: 192.168.50.115
	I0229 18:31:39.414832   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Reserving static IP address...
	I0229 18:31:39.415212   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-907979", mac: "52:54:00:24:d4:e4", ip: "192.168.50.115"} in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.485979   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Getting to WaitForSSH function...
	I0229 18:31:39.486014   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Reserved static IP address: 192.168.50.115
	I0229 18:31:39.486028   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Waiting for SSH to be available...
	I0229 18:31:39.488735   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.489227   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:minikube Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:39.489261   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.489341   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Using SSH client type: external
	I0229 18:31:39.489354   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa (-rw-------)
	I0229 18:31:39.489393   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:31:39.489405   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | About to run SSH command:
	I0229 18:31:39.489447   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | exit 0
	I0229 18:31:39.610723   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | SSH cmd err, output: <nil>: 
	I0229 18:31:39.610980   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) KVM machine creation complete!
	I0229 18:31:39.611310   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetConfigRaw
	I0229 18:31:39.611829   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:39.612024   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:39.612224   38858 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:31:39.612246   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetState
	I0229 18:31:39.613396   38858 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:31:39.613412   38858 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:31:39.613421   38858 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:31:39.613428   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:39.615643   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.616046   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:39.616090   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.616209   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:39.616366   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.616522   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.616614   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:39.616776   38858 main.go:141] libmachine: Using SSH client type: native
	I0229 18:31:39.616987   38858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:31:39.617002   38858 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:31:39.718203   38858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:31:39.718226   38858 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:31:39.718233   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:39.720948   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.721266   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:39.721300   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.721482   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:39.721686   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.721859   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.721977   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:39.722136   38858 main.go:141] libmachine: Using SSH client type: native
	I0229 18:31:39.722306   38858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:31:39.722316   38858 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:31:39.823701   38858 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:31:39.823823   38858 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:31:39.823838   38858 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:31:39.823849   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetMachineName
	I0229 18:31:39.824090   38858 buildroot.go:166] provisioning hostname "kubernetes-upgrade-907979"
	I0229 18:31:39.824114   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetMachineName
	I0229 18:31:39.824318   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:39.826681   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.827039   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:39.827066   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.827285   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:39.827470   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.827657   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.827843   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:39.827992   38858 main.go:141] libmachine: Using SSH client type: native
	I0229 18:31:39.828247   38858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:31:39.828267   38858 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-907979 && echo "kubernetes-upgrade-907979" | sudo tee /etc/hostname
	I0229 18:31:39.947463   38858 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-907979
	
	I0229 18:31:39.947490   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:39.950288   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.950648   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:39.950680   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:39.950846   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:39.951044   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.951231   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:39.951375   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:39.951529   38858 main.go:141] libmachine: Using SSH client type: native
	I0229 18:31:39.951744   38858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:31:39.951762   38858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-907979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-907979/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-907979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:31:40.062197   38858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:31:40.062236   38858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:31:40.062294   38858 buildroot.go:174] setting up certificates
	I0229 18:31:40.062312   38858 provision.go:83] configureAuth start
	I0229 18:31:40.062335   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetMachineName
	I0229 18:31:40.062577   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetIP
	I0229 18:31:40.065064   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.065387   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.065418   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.065529   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:40.068053   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.068561   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.068590   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.068767   38858 provision.go:138] copyHostCerts
	I0229 18:31:40.068825   38858 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:31:40.068845   38858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:31:40.068914   38858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:31:40.069003   38858 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:31:40.069014   38858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:31:40.069034   38858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:31:40.069087   38858 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:31:40.069096   38858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:31:40.069112   38858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:31:40.069154   38858 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-907979 san=[192.168.50.115 192.168.50.115 localhost 127.0.0.1 minikube kubernetes-upgrade-907979]
	I0229 18:31:40.331182   38858 provision.go:172] copyRemoteCerts
	I0229 18:31:40.331240   38858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:31:40.331263   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:40.334012   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.334381   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.334411   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.334615   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:40.334804   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:40.334959   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:40.335100   38858 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:31:40.417198   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:31:40.444472   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 18:31:40.470065   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:31:40.495732   38858 provision.go:86] duration metric: configureAuth took 433.401923ms
	I0229 18:31:40.495760   38858 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:31:40.495959   38858 config.go:182] Loaded profile config "kubernetes-upgrade-907979": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:31:40.495982   38858 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:31:40.495993   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetURL
	I0229 18:31:40.497274   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Using libvirt version 6000000
	I0229 18:31:40.499930   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.500277   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.500305   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.500435   38858 main.go:141] libmachine: Docker is up and running!
	I0229 18:31:40.500451   38858 main.go:141] libmachine: Reticulating splines...
	I0229 18:31:40.500459   38858 client.go:171] LocalClient.Create took 25.480168705s
	I0229 18:31:40.500484   38858 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-907979" took 25.48025771s
	I0229 18:31:40.500496   38858 start.go:300] post-start starting for "kubernetes-upgrade-907979" (driver="kvm2")
	I0229 18:31:40.500512   38858 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:31:40.500528   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:40.500742   38858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:31:40.500763   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:40.502985   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.503334   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.503363   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.503452   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:40.503634   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:40.503838   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:40.504007   38858 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:31:40.585313   38858 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:31:40.590094   38858 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:31:40.590118   38858 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:31:40.590175   38858 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:31:40.590260   38858 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:31:40.590376   38858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:31:40.600213   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:31:40.627470   38858 start.go:303] post-start completed in 126.956762ms
	I0229 18:31:40.627525   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetConfigRaw
	I0229 18:31:40.628099   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetIP
	I0229 18:31:40.630786   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.631136   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.631165   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.631414   38858 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/config.json ...
	I0229 18:31:40.631652   38858 start.go:128] duration metric: createHost completed in 25.635869465s
	I0229 18:31:40.631681   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:40.633905   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.634210   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.634249   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.634383   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:40.634581   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:40.634730   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:40.634880   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:40.635040   38858 main.go:141] libmachine: Using SSH client type: native
	I0229 18:31:40.635254   38858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:31:40.635266   38858 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:31:40.735523   38858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709231500.717329555
	
	I0229 18:31:40.735541   38858 fix.go:206] guest clock: 1709231500.717329555
	I0229 18:31:40.735564   38858 fix.go:219] Guest: 2024-02-29 18:31:40.717329555 +0000 UTC Remote: 2024-02-29 18:31:40.631670706 +0000 UTC m=+47.145479016 (delta=85.658849ms)
	I0229 18:31:40.735601   38858 fix.go:190] guest clock delta is within tolerance: 85.658849ms
	I0229 18:31:40.735606   38858 start.go:83] releasing machines lock for "kubernetes-upgrade-907979", held for 25.739993292s
	I0229 18:31:40.735631   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:40.735896   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetIP
	I0229 18:31:40.738717   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.739092   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.739130   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.739309   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:40.739791   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:40.739977   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:31:40.740064   38858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:31:40.740126   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:40.740196   38858 ssh_runner.go:195] Run: cat /version.json
	I0229 18:31:40.740228   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:31:40.742853   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.743037   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.743170   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.743193   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.743342   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:40.743486   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:40.743499   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:40.743521   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:40.743651   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:40.743723   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:31:40.743836   38858 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:31:40.743893   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:31:40.744028   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:31:40.744172   38858 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:31:40.820719   38858 ssh_runner.go:195] Run: systemctl --version
	I0229 18:31:40.849688   38858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:31:40.859961   38858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:31:40.860047   38858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:31:40.884916   38858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:31:40.884951   38858 start.go:475] detecting cgroup driver to use...
	I0229 18:31:40.885032   38858 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:31:40.924499   38858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:31:40.940867   38858 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:31:40.940943   38858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:31:40.957295   38858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:31:40.973554   38858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:31:41.105503   38858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:31:41.284479   38858 docker.go:233] disabling docker service ...
	I0229 18:31:41.284553   38858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:31:41.301596   38858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:31:41.315561   38858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:31:41.439452   38858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:31:41.565482   38858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:31:41.581244   38858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:31:41.602633   38858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:31:41.614110   38858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:31:41.625902   38858 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:31:41.625966   38858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:31:41.637277   38858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:31:41.648403   38858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:31:41.659909   38858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:31:41.672354   38858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:31:41.685066   38858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:31:41.696995   38858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:31:41.707871   38858 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:31:41.707932   38858 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:31:41.724168   38858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:31:41.739953   38858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:31:41.885233   38858 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:31:41.924725   38858 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:31:41.924794   38858 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:31:41.930630   38858 retry.go:31] will retry after 693.165783ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:31:42.624219   38858 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:31:42.630053   38858 start.go:543] Will wait 60s for crictl version
	I0229 18:31:42.630115   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:42.635031   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:31:42.674494   38858 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:31:42.674578   38858 ssh_runner.go:195] Run: containerd --version
	I0229 18:31:42.704231   38858 ssh_runner.go:195] Run: containerd --version
	I0229 18:31:42.743841   38858 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	I0229 18:31:42.745022   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetIP
	I0229 18:31:42.747966   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:42.748354   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:31:31 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:31:42.748388   38858 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:31:42.748629   38858 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:31:42.754741   38858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:31:42.773324   38858 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 18:31:42.773389   38858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:31:42.818246   38858 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:31:42.818329   38858 ssh_runner.go:195] Run: which lz4
	I0229 18:31:42.823182   38858 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:31:42.828153   38858 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:31:42.828189   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (440628646 bytes)
	I0229 18:31:44.778967   38858 containerd.go:548] Took 1.955809 seconds to copy over tarball
	I0229 18:31:44.779046   38858 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:31:47.464132   38858 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.685051723s)
	I0229 18:31:47.464160   38858 containerd.go:555] Took 2.685168 seconds to extract the tarball
	I0229 18:31:47.464171   38858 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:31:47.509480   38858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:31:47.636993   38858 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:31:47.670224   38858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:31:47.707832   38858 retry.go:31] will retry after 306.42112ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:31:47Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:31:48.015174   38858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:31:48.064399   38858 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:31:48.064425   38858 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:31:48.064479   38858 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:31:48.064537   38858 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:31:48.064539   38858 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:31:48.064584   38858 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:31:48.064697   38858 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:31:48.064713   38858 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:31:48.064545   38858 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:31:48.064755   38858 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:31:48.066037   38858 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:31:48.066049   38858 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:31:48.066038   38858 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:31:48.066046   38858 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:31:48.066037   38858 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:31:48.066336   38858 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:31:48.066425   38858 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:31:48.066462   38858 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:31:48.335572   38858 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.16.0" and sha "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e"
	I0229 18:31:48.335637   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:48.386204   38858 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.1" and sha "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0229 18:31:48.386265   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:48.407733   38858 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.16.0" and sha "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"
	I0229 18:31:48.407794   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:48.418683   38858 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.16.0" and sha "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a"
	I0229 18:31:48.418756   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:48.434997   38858 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.3.15-0" and sha "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	I0229 18:31:48.435079   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:48.460245   38858 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.16.0" and sha "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d"
	I0229 18:31:48.460318   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:48.473599   38858 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.2" and sha "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b"
	I0229 18:31:48.473688   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:48.630238   38858 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:31:48.656961   38858 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:31:48.657017   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:49.413922   38858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.027631847s)
	I0229 18:31:49.414020   38858 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:31:49.414059   38858 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:31:49.414109   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:49.641728   38858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.233912414s)
	I0229 18:31:49.641828   38858 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:31:49.641865   38858 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:31:49.641913   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:49.642330   38858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.22355498s)
	I0229 18:31:49.642418   38858 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:31:49.642459   38858 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:31:49.642523   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:49.642894   38858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.207794378s)
	I0229 18:31:49.642951   38858 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:31:49.642973   38858 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:31:49.643024   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:49.643448   38858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.183109933s)
	I0229 18:31:49.643502   38858 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:31:49.643525   38858 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:31:49.643559   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:49.663538   38858 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.189817719s)
	I0229 18:31:49.663596   38858 ssh_runner.go:235] Completed: which crictl: (1.006555751s)
	I0229 18:31:49.663650   38858 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:31:49.663680   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:31:49.663692   38858 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:31:49.663739   38858 ssh_runner.go:195] Run: which crictl
	I0229 18:31:49.663757   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:31:49.667082   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:31:49.667182   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:31:49.677878   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:31:49.677921   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:31:49.774136   38858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:31:49.774285   38858 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:31:49.774373   38858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:31:49.812117   38858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:31:49.820006   38858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:31:49.825653   38858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:31:49.827982   38858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:31:49.860586   38858 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:31:49.936695   38858 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 18:31:49.936781   38858 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:31:50.178963   38858 cache_images.go:92] LoadImages completed in 2.114520608s
	W0229 18:31:50.179032   38858 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I0229 18:31:50.179106   38858 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:31:50.232343   38858 cni.go:84] Creating CNI manager for ""
	I0229 18:31:50.232374   38858 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:31:50.232397   38858 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:31:50.232421   38858 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.115 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-907979 NodeName:kubernetes-upgrade-907979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:31:50.232600   38858 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-907979"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-907979
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.115:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
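Once copied to the guest (the scp/cp steps below place it at /var/tmp/minikube/kubeadm.yaml), this rendered config can be read back and compared against kubeadm's own v1beta1 defaults. A sketch, using the profile name and binary path that appear elsewhere in this log:

	minikube ssh -p kubernetes-upgrade-907979 "sudo cat /var/tmp/minikube/kubeadm.yaml"
	minikube ssh -p kubernetes-upgrade-907979 "sudo /var/lib/minikube/binaries/v1.16.0/kubeadm config print init-defaults"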
	
	I0229 18:31:50.232719   38858 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-907979 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
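This ExecStart is installed as the 10-kubeadm.conf drop-in scp'd just below. If the kubelet later refuses to come up (as it does further down in this run), the rendered unit and its status can be checked directly on the guest; a sketch with this profile's name:

	minikube ssh -p kubernetes-upgrade-907979 "sudo systemctl cat kubelet"
	minikube ssh -p kubernetes-upgrade-907979 "sudo systemctl status kubelet"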
	I0229 18:31:50.232795   38858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:31:50.244938   38858 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:31:50.245020   38858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:31:50.256107   38858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (447 bytes)
	I0229 18:31:50.279359   38858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:31:50.301065   38858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
	I0229 18:31:50.323361   38858 ssh_runner.go:195] Run: grep 192.168.50.115	control-plane.minikube.internal$ /etc/hosts
	I0229 18:31:50.328362   38858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
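Unpacked for readability, that one-liner drops any stale control-plane.minikube.internal entry and re-appends the advertised IP before copying the result back over /etc/hosts; an equivalent sketch:

	# remove any existing entry for the control-plane alias
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	# append the address the API server will advertise
	echo "192.168.50.115	control-plane.minikube.internal" >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts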
	I0229 18:31:50.344271   38858 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979 for IP: 192.168.50.115
	I0229 18:31:50.344314   38858 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:31:50.344518   38858 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:31:50.344570   38858 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:31:50.344625   38858 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key
	I0229 18:31:50.344643   38858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.crt with IP's: []
	I0229 18:31:50.417969   38858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.crt ...
	I0229 18:31:50.417999   38858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.crt: {Name:mk9d5d98828f48ca83837fc2d133d31008229c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:31:50.418189   38858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key ...
	I0229 18:31:50.418206   38858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key: {Name:mk5d7a5ae6275032e84a2437667bf570eb1e5aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:31:50.418335   38858 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key.17cdc941
	I0229 18:31:50.418353   38858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt.17cdc941 with IP's: [192.168.50.115 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:31:50.488374   38858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt.17cdc941 ...
	I0229 18:31:50.488407   38858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt.17cdc941: {Name:mk8fb423ef564ae3418ae1cccb527c2813b92791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:31:50.488577   38858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key.17cdc941 ...
	I0229 18:31:50.488593   38858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key.17cdc941: {Name:mkad7d3b4e0cf816f3d4279137e362cc81586248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:31:50.488679   38858 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt.17cdc941 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt
	I0229 18:31:50.488766   38858 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key.17cdc941 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key
	I0229 18:31:50.488844   38858 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.key
	I0229 18:31:50.488864   38858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.crt with IP's: []
	I0229 18:31:50.627577   38858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.crt ...
	I0229 18:31:50.627607   38858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.crt: {Name:mk74167ecb0a78da799d04662629156d4c6929c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:31:50.627788   38858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.key ...
	I0229 18:31:50.627806   38858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.key: {Name:mk398642fd0e635098010c87d7425ec364069701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
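The SANs minted into the new apiserver certificate (192.168.50.115, 10.96.0.1, 127.0.0.1 and 10.0.0.1, per the crypto.go line above) can be confirmed with openssl against the profile path from this log; a sketch:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'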
	I0229 18:31:50.628011   38858 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:31:50.628068   38858 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:31:50.628090   38858 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:31:50.628138   38858 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:31:50.628176   38858 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:31:50.628206   38858 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:31:50.628260   38858 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:31:50.628873   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:31:50.664379   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:31:50.691478   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:31:50.727410   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:31:50.761679   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:31:50.795780   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:31:50.831708   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:31:50.866258   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:31:50.895392   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:31:50.927312   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:31:50.955393   38858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:31:50.988431   38858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:31:51.016362   38858 ssh_runner.go:195] Run: openssl version
	I0229 18:31:51.025784   38858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:31:51.039175   38858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:31:51.045046   38858 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:31:51.045105   38858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:31:51.052098   38858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:31:51.064908   38858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:31:51.077585   38858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:31:51.083181   38858 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:31:51.083242   38858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:31:51.090716   38858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:31:51.103171   38858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:31:51.115080   38858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:31:51.122423   38858 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:31:51.122494   38858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:31:51.131538   38858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
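The hash/symlink pairs above follow OpenSSL's c_rehash convention: the link name is the certificate's subject-name hash plus a .0 suffix, which is why minikubeCA.pem lands at b5213941.0 and the two test certs at 51391683.0 and 3ec20f2e.0. The same step done by hand looks roughly like:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0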
	I0229 18:31:51.144964   38858 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:31:51.151481   38858 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:31:51.151553   38858 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-907979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.115 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:31:51.151646   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:31:51.151705   38858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:31:51.200730   38858 cri.go:89] found id: ""
	I0229 18:31:51.200806   38858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:31:51.216240   38858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:31:51.227254   38858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:31:51.238109   38858 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:31:51.238159   38858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:31:51.603513   38858 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:33:50.124936   38858 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:33:50.125070   38858 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:33:50.126438   38858 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:33:50.126498   38858 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:33:50.126609   38858 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:33:50.126715   38858 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:33:50.126798   38858 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:33:50.126888   38858 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:33:50.126971   38858 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:33:50.127011   38858 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:33:50.127075   38858 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:33:50.128831   38858 out.go:204]   - Generating certificates and keys ...
	I0229 18:33:50.128915   38858 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:33:50.128999   38858 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:33:50.129083   38858 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:33:50.129158   38858 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:33:50.129226   38858 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:33:50.129276   38858 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:33:50.129342   38858 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:33:50.129456   38858 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-907979 localhost] and IPs [192.168.50.115 127.0.0.1 ::1]
	I0229 18:33:50.129502   38858 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:33:50.129611   38858 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-907979 localhost] and IPs [192.168.50.115 127.0.0.1 ::1]
	I0229 18:33:50.129674   38858 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:33:50.129746   38858 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:33:50.129791   38858 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:33:50.129853   38858 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:33:50.129932   38858 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:33:50.130013   38858 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:33:50.130078   38858 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:33:50.130126   38858 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:33:50.130182   38858 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:33:50.131878   38858 out.go:204]   - Booting up control plane ...
	I0229 18:33:50.131972   38858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:33:50.132042   38858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:33:50.132113   38858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:33:50.132207   38858 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:33:50.132399   38858 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:33:50.132479   38858 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:33:50.132551   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:50.132708   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:33:50.132774   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:50.132951   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:33:50.133028   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:50.133171   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:33:50.133237   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:50.133400   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:33:50.133464   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:50.133626   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:33:50.133634   38858 kubeadm.go:322] 
	I0229 18:33:50.133669   38858 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:33:50.133703   38858 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:33:50.133709   38858 kubeadm.go:322] 
	I0229 18:33:50.133737   38858 kubeadm.go:322] This error is likely caused by:
	I0229 18:33:50.133765   38858 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:33:50.133855   38858 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:33:50.133864   38858 kubeadm.go:322] 
	I0229 18:33:50.133947   38858 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:33:50.133974   38858 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:33:50.134000   38858 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:33:50.134007   38858 kubeadm.go:322] 
	I0229 18:33:50.134091   38858 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:33:50.134182   38858 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:33:50.134267   38858 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:33:50.134322   38858 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:33:50.134423   38858 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:33:50.134507   38858 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 18:33:50.134620   38858 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-907979 localhost] and IPs [192.168.50.115 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-907979 localhost] and IPs [192.168.50.115 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:33:50.134681   38858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 18:33:50.607547   38858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:33:50.624629   38858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:33:50.637596   38858 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:33:50.637652   38858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:33:50.701186   38858 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:33:50.701286   38858 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:33:50.842490   38858 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:33:50.842623   38858 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:33:50.842746   38858 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:33:51.052583   38858 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:33:51.052958   38858 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:33:51.060798   38858 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:33:51.193769   38858 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:33:51.195729   38858 out.go:204]   - Generating certificates and keys ...
	I0229 18:33:51.195827   38858 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:33:51.195932   38858 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:33:51.196060   38858 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:33:51.196144   38858 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:33:51.196243   38858 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:33:51.196326   38858 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:33:51.196435   38858 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:33:51.196540   38858 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:33:51.196667   38858 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:33:51.196793   38858 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:33:51.196845   38858 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:33:51.196924   38858 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:33:51.288004   38858 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:33:51.449807   38858 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:33:51.538141   38858 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:33:51.748404   38858 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:33:51.749266   38858 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:33:51.750962   38858 out.go:204]   - Booting up control plane ...
	I0229 18:33:51.751070   38858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:33:51.759889   38858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:33:51.764677   38858 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:33:51.765913   38858 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:33:51.768222   38858 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:34:31.767812   38858 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:34:31.768084   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:34:31.768308   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:34:36.768786   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:34:36.769045   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:34:46.770312   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:34:46.770523   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:06.771907   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:06.772166   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:46.770710   38858 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:46.771014   38858 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:46.771034   38858 kubeadm.go:322] 
	I0229 18:35:46.771068   38858 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:35:46.771237   38858 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:35:46.771249   38858 kubeadm.go:322] 
	I0229 18:35:46.771277   38858 kubeadm.go:322] This error is likely caused by:
	I0229 18:35:46.771306   38858 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:35:46.771418   38858 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:35:46.771432   38858 kubeadm.go:322] 
	I0229 18:35:46.771553   38858 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:35:46.771607   38858 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:35:46.771653   38858 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:35:46.771663   38858 kubeadm.go:322] 
	I0229 18:35:46.771818   38858 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:35:46.771937   38858 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:35:46.772056   38858 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:35:46.772122   38858 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:35:46.772244   38858 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:35:46.772301   38858 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:35:46.773940   38858 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:35:46.774092   38858 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:35:46.774175   38858 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
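The advice above is kubeadm's generic docker-oriented boilerplate; this profile runs containerd, so the crictl equivalents (the same tool minikube uses in the log-gathering steps just below) are the ones that apply. A sketch, run inside the guest:

	sudo crictl ps -a | grep kube | grep -v pause
	sudo crictl logs CONTAINERID
	sudo journalctl -u kubelet --no-pager -n 200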
	I0229 18:35:46.774238   38858 kubeadm.go:406] StartCluster complete in 3m55.622691029s
	I0229 18:35:46.774274   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:35:46.774332   38858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:35:46.844859   38858 cri.go:89] found id: ""
	I0229 18:35:46.844881   38858 logs.go:276] 0 containers: []
	W0229 18:35:46.844889   38858 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:35:46.844894   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:35:46.844956   38858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:35:46.904287   38858 cri.go:89] found id: ""
	I0229 18:35:46.904313   38858 logs.go:276] 0 containers: []
	W0229 18:35:46.904322   38858 logs.go:278] No container was found matching "etcd"
	I0229 18:35:46.904328   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:35:46.904389   38858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:35:46.950003   38858 cri.go:89] found id: ""
	I0229 18:35:46.950031   38858 logs.go:276] 0 containers: []
	W0229 18:35:46.950041   38858 logs.go:278] No container was found matching "coredns"
	I0229 18:35:46.950047   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:35:46.950104   38858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:35:47.000199   38858 cri.go:89] found id: ""
	I0229 18:35:47.000233   38858 logs.go:276] 0 containers: []
	W0229 18:35:47.000243   38858 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:35:47.000251   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:35:47.000305   38858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:35:47.040689   38858 cri.go:89] found id: ""
	I0229 18:35:47.040713   38858 logs.go:276] 0 containers: []
	W0229 18:35:47.040721   38858 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:35:47.040726   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:35:47.040776   38858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:35:47.082414   38858 cri.go:89] found id: ""
	I0229 18:35:47.082444   38858 logs.go:276] 0 containers: []
	W0229 18:35:47.082454   38858 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:35:47.082462   38858 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:35:47.082520   38858 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:35:47.118961   38858 cri.go:89] found id: ""
	I0229 18:35:47.118992   38858 logs.go:276] 0 containers: []
	W0229 18:35:47.119003   38858 logs.go:278] No container was found matching "kindnet"
	I0229 18:35:47.119014   38858 logs.go:123] Gathering logs for dmesg ...
	I0229 18:35:47.119033   38858 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:35:47.134620   38858 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:35:47.134656   38858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:35:47.262910   38858 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:35:47.262941   38858 logs.go:123] Gathering logs for containerd ...
	I0229 18:35:47.262962   38858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:35:47.299919   38858 logs.go:123] Gathering logs for container status ...
	I0229 18:35:47.299949   38858 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:35:47.352847   38858 logs.go:123] Gathering logs for kubelet ...
	I0229 18:35:47.352878   38858 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:35:47.408044   38858 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:35:47.408107   38858 out.go:239] * 
	* 
	W0229 18:35:47.408177   38858 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:35:47.408205   38858 out.go:239] * 
	* 
	W0229 18:35:47.409127   38858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:35:47.412600   38858 out.go:177] 
	W0229 18:35:47.413921   38858 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:35:47.413979   38858 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:35:47.414058   38858 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:35:47.415752   38858 out.go:177] 

                                                
                                                
** /stderr **
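The kubeadm wait-control-plane failure above points at the kubelet and the container runtime. Since this run uses --container-runtime=containerd, the docker-based examples printed by kubeadm do not apply directly; a minimal troubleshooting sketch for this setup (profile name taken from the test command below; assumes crictl is available inside the minikube VM, which this log does not show):

	# Check kubelet health and recent logs on the node
	minikube ssh -p kubernetes-upgrade-907979 "sudo systemctl status kubelet"
	minikube ssh -p kubernetes-upgrade-907979 "sudo journalctl -xeu kubelet"
	# With containerd, list control-plane containers via crictl rather than docker
	minikube ssh -p kubernetes-upgrade-907979 "sudo crictl ps -a | grep kube | grep -v pause"
	minikube ssh -p kubernetes-upgrade-907979 "sudo crictl logs CONTAINERID"
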
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-907979
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-907979: (1.333349732s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-907979 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-907979 status --format={{.Host}}: exit status 7 (84.303586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (40.730884215s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-907979 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (94.037899ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-907979] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-907979
	    minikube start -p kubernetes-upgrade-907979 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9079792 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-907979 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
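The K8S_DOWNGRADE_UNSUPPORTED output above lists minikube's own recovery options. A hedged sketch of option 1 (recreate the cluster at the older version), combined with the cgroup-driver suggestion minikube printed for the earlier v1.16.0 failure; the flags are the ones shown in this log, and success is not implied:

	# Delete the existing v1.29.0-rc.2 profile and recreate it at v1.16.0
	minikube delete -p kubernetes-upgrade-907979
	minikube start -p kubernetes-upgrade-907979 --memory=2200 \
	  --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd
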
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-907979 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (22.044900764s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 18:36:51.81847802 +0000 UTC m=+3530.322810516
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-907979 -n kubernetes-upgrade-907979
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-907979 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-907979 logs -n 25: (1.268175405s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-387000 sudo                                 | cilium-387000             | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-387000 sudo find                            | cilium-387000             | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-387000 sudo crio                            | cilium-387000             | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC |                     |
	|         | config                                                |                           |         |         |                     |                     |
	| delete  | -p cilium-387000                                      | cilium-387000             | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC | 29 Feb 24 18:31 UTC |
	| start   | -p stopped-upgrade-475131                             | minikube                  | jenkins | v1.26.0 | 29 Feb 24 18:31 UTC | 29 Feb 24 18:32 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --vm-driver=kvm2                                      |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| ssh     | cert-options-153536 ssh                               | cert-options-153536       | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC | 29 Feb 24 18:31 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-153536 -- sudo                        | cert-options-153536       | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC | 29 Feb 24 18:31 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-153536                                | cert-options-153536       | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC | 29 Feb 24 18:31 UTC |
	| start   | -p old-k8s-version-561577                             | old-k8s-version-561577    | jenkins | v1.32.0 | 29 Feb 24 18:31 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-475131 stop                           | minikube                  | jenkins | v1.26.0 | 29 Feb 24 18:32 UTC | 29 Feb 24 18:32 UTC |
	| start   | -p cert-expiration-829233                             | cert-expiration-829233    | jenkins | v1.32.0 | 29 Feb 24 18:32 UTC | 29 Feb 24 18:32 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-475131                             | stopped-upgrade-475131    | jenkins | v1.32.0 | 29 Feb 24 18:32 UTC | 29 Feb 24 18:34 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-829233                             | cert-expiration-829233    | jenkins | v1.32.0 | 29 Feb 24 18:32 UTC | 29 Feb 24 18:32 UTC |
	| start   | -p no-preload-644659                                  | no-preload-644659         | jenkins | v1.32.0 | 29 Feb 24 18:32 UTC | 29 Feb 24 18:36 UTC |
	|         | --memory=2200 --alsologtostderr                       |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                     |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-475131                             | stopped-upgrade-475131    | jenkins | v1.32.0 | 29 Feb 24 18:34 UTC | 29 Feb 24 18:34 UTC |
	| start   | -p embed-certs-596503                                 | embed-certs-596503        | jenkins | v1.32.0 | 29 Feb 24 18:34 UTC | 29 Feb 24 18:35 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-907979                          | kubernetes-upgrade-907979 | jenkins | v1.32.0 | 29 Feb 24 18:35 UTC | 29 Feb 24 18:35 UTC |
	| start   | -p kubernetes-upgrade-907979                          | kubernetes-upgrade-907979 | jenkins | v1.32.0 | 29 Feb 24 18:35 UTC | 29 Feb 24 18:36 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-596503           | embed-certs-596503        | jenkins | v1.32.0 | 29 Feb 24 18:35 UTC | 29 Feb 24 18:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-596503                                 | embed-certs-596503        | jenkins | v1.32.0 | 29 Feb 24 18:35 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-907979                          | kubernetes-upgrade-907979 | jenkins | v1.32.0 | 29 Feb 24 18:36 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-907979                          | kubernetes-upgrade-907979 | jenkins | v1.32.0 | 29 Feb 24 18:36 UTC | 29 Feb 24 18:36 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-644659            | no-preload-644659         | jenkins | v1.32.0 | 29 Feb 24 18:36 UTC | 29 Feb 24 18:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-644659                                  | no-preload-644659         | jenkins | v1.32.0 | 29 Feb 24 18:36 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-561577       | old-k8s-version-561577    | jenkins | v1.32.0 | 29 Feb 24 18:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:36:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:36:29.822588   44016 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:36:29.822734   44016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:36:29.822746   44016 out.go:304] Setting ErrFile to fd 2...
	I0229 18:36:29.822750   44016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:36:29.822984   44016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:36:29.823529   44016 out.go:298] Setting JSON to false
	I0229 18:36:29.824449   44016 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4731,"bootTime":1709227059,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:36:29.824507   44016 start.go:139] virtualization: kvm guest
	I0229 18:36:29.826615   44016 out.go:177] * [kubernetes-upgrade-907979] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:36:29.827996   44016 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:36:29.827999   44016 notify.go:220] Checking for updates...
	I0229 18:36:29.829410   44016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:36:29.830883   44016 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:36:29.832378   44016 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:36:29.833785   44016 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:36:29.835137   44016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:36:29.836768   44016 config.go:182] Loaded profile config "kubernetes-upgrade-907979": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 18:36:29.837153   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:29.837202   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:29.852690   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0229 18:36:29.853129   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:29.853657   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:29.853682   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:29.854064   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:29.854219   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:29.854474   44016 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:36:29.854783   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:29.854819   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:29.869036   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I0229 18:36:29.869424   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:29.869830   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:29.869852   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:29.870206   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:29.870378   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:29.904186   44016 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:36:29.905601   44016 start.go:299] selected driver: kvm2
	I0229 18:36:29.905618   44016 start.go:903] validating driver "kvm2" against &{Name:kubernetes-upgrade-907979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:36:29.905706   44016 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:36:29.906378   44016 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:36:29.906458   44016 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:36:29.921059   44016 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:36:29.921481   44016 cni.go:84] Creating CNI manager for ""
	I0229 18:36:29.921500   44016 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:36:29.921513   44016 start_flags.go:323] config:
	{Name:kubernetes-upgrade-907979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:36:29.921723   44016 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:36:29.924317   44016 out.go:177] * Starting control plane node kubernetes-upgrade-907979 in cluster kubernetes-upgrade-907979
	I0229 18:36:29.926493   44016 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 18:36:29.926529   44016 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0229 18:36:29.926558   44016 cache.go:56] Caching tarball of preloaded images
	I0229 18:36:29.926635   44016 preload.go:174] Found /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:36:29.926649   44016 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0229 18:36:29.926753   44016 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/config.json ...
	I0229 18:36:29.926969   44016 start.go:365] acquiring machines lock for kubernetes-upgrade-907979: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:36:29.927011   44016 start.go:369] acquired machines lock for "kubernetes-upgrade-907979" in 23.668µs
	I0229 18:36:29.927030   44016 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:36:29.927037   44016 fix.go:54] fixHost starting: 
	I0229 18:36:29.927381   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:29.927419   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:29.941101   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40525
	I0229 18:36:29.941513   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:29.941972   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:29.941990   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:29.942288   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:29.942477   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:29.942635   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetState
	I0229 18:36:29.944307   44016 fix.go:102] recreateIfNeeded on kubernetes-upgrade-907979: state=Running err=<nil>
	W0229 18:36:29.944327   44016 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:36:29.946124   44016 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-907979" VM ...
	I0229 18:36:29.947491   44016 machine.go:88] provisioning docker machine ...
	I0229 18:36:29.947514   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:29.947731   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetMachineName
	I0229 18:36:29.947852   44016 buildroot.go:166] provisioning hostname "kubernetes-upgrade-907979"
	I0229 18:36:29.947871   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetMachineName
	I0229 18:36:29.947996   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:29.950489   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:29.950930   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:29.950957   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:29.951090   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:29.951255   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:29.951395   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:29.951503   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:29.951706   44016 main.go:141] libmachine: Using SSH client type: native
	I0229 18:36:29.951926   44016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:36:29.951950   44016 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-907979 && echo "kubernetes-upgrade-907979" | sudo tee /etc/hostname
	I0229 18:36:30.094482   44016 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-907979
	
	I0229 18:36:30.094525   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:30.097593   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.097995   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.098038   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.098239   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:30.098447   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.098632   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.098776   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:30.098982   44016 main.go:141] libmachine: Using SSH client type: native
	I0229 18:36:30.099190   44016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:36:30.099215   44016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-907979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-907979/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-907979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:36:30.224314   44016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:36:30.224345   44016 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:36:30.224365   44016 buildroot.go:174] setting up certificates
	I0229 18:36:30.224374   44016 provision.go:83] configureAuth start
	I0229 18:36:30.224388   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetMachineName
	I0229 18:36:30.224681   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetIP
	I0229 18:36:30.227536   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.227861   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.227888   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.227971   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:30.230013   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.230342   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.230377   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.230586   44016 provision.go:138] copyHostCerts
	I0229 18:36:30.230649   44016 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:36:30.230671   44016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:36:30.230741   44016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:36:30.230845   44016 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:36:30.230854   44016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:36:30.230874   44016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:36:30.230980   44016 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:36:30.230990   44016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:36:30.231008   44016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:36:30.231064   44016 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-907979 san=[192.168.50.115 192.168.50.115 localhost 127.0.0.1 minikube kubernetes-upgrade-907979]
	I0229 18:36:30.398283   44016 provision.go:172] copyRemoteCerts
	I0229 18:36:30.398340   44016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:36:30.398362   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:30.400847   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.401144   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.401169   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.401315   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:30.401505   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.401662   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:30.401805   44016 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:36:30.491260   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:36:30.520945   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 18:36:30.549986   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:36:30.584952   44016 provision.go:86] duration metric: configureAuth took 360.566599ms
	I0229 18:36:30.584979   44016 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:36:30.585144   44016 config.go:182] Loaded profile config "kubernetes-upgrade-907979": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 18:36:30.585157   44016 machine.go:91] provisioned docker machine in 637.649876ms
	I0229 18:36:30.585167   44016 start.go:300] post-start starting for "kubernetes-upgrade-907979" (driver="kvm2")
	I0229 18:36:30.585181   44016 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:36:30.585215   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:30.585531   44016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:36:30.585557   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:30.587867   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.588217   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.588252   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.588343   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:30.588543   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.588694   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:30.588837   44016 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:36:30.680713   44016 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:36:30.686336   44016 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:36:30.686358   44016 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:36:30.686412   44016 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:36:30.686503   44016 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:36:30.686622   44016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:36:30.699742   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:36:30.730885   44016 start.go:303] post-start completed in 145.705968ms
	I0229 18:36:30.730905   44016 fix.go:56] fixHost completed within 803.867975ms
	I0229 18:36:30.730929   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:30.733472   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.733865   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.733895   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.734096   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:30.734297   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.734490   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.734652   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:30.734839   44016 main.go:141] libmachine: Using SSH client type: native
	I0229 18:36:30.734992   44016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.115 22 <nil> <nil>}
	I0229 18:36:30.735003   44016 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:36:30.853926   44016 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709231790.831079579
	
	I0229 18:36:30.853949   44016 fix.go:206] guest clock: 1709231790.831079579
	I0229 18:36:30.853959   44016 fix.go:219] Guest: 2024-02-29 18:36:30.831079579 +0000 UTC Remote: 2024-02-29 18:36:30.730909451 +0000 UTC m=+0.954463135 (delta=100.170128ms)
	I0229 18:36:30.853983   44016 fix.go:190] guest clock delta is within tolerance: 100.170128ms
	I0229 18:36:30.853995   44016 start.go:83] releasing machines lock for "kubernetes-upgrade-907979", held for 926.966106ms
	I0229 18:36:30.854045   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:30.854340   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetIP
	I0229 18:36:30.856854   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.857240   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.857267   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.857509   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:30.858085   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:30.858288   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:30.858369   44016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:36:30.858410   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:30.858571   44016 ssh_runner.go:195] Run: cat /version.json
	I0229 18:36:30.858598   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:30.861442   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.861602   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.861875   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.861900   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.862079   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:30.862109   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:30.862110   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:30.862282   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.862330   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:30.862449   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:30.862494   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:30.862734   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:30.862718   44016 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:36:30.862881   44016 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:36:30.977484   44016 ssh_runner.go:195] Run: systemctl --version
	I0229 18:36:30.984303   44016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:36:30.990683   44016 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:36:30.990758   44016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:36:31.001399   44016 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 18:36:31.001423   44016 start.go:475] detecting cgroup driver to use...
	I0229 18:36:31.001487   44016 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:36:31.016403   44016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:36:31.030536   44016 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:36:31.030587   44016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:36:31.045067   44016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:36:31.059967   44016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:36:31.191865   44016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:36:31.349214   44016 docker.go:233] disabling docker service ...
	I0229 18:36:31.349319   44016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:36:31.372992   44016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:36:31.389967   44016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:36:31.535233   44016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:36:31.715946   44016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:36:31.737519   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:36:31.769411   44016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:36:31.785971   44016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:36:31.808433   44016 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:36:31.808496   44016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:36:31.822745   44016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:36:31.838481   44016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:36:31.859487   44016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:36:31.873201   44016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:36:31.885712   44016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:36:31.899351   44016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:36:31.910459   44016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:36:31.921602   44016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:36:32.072701   44016 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:36:32.104463   44016 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:36:32.104531   44016 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:36:32.110237   44016 retry.go:31] will retry after 765.368785ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:36:32.876167   44016 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:36:32.883109   44016 start.go:543] Will wait 60s for crictl version
	I0229 18:36:32.883151   44016 ssh_runner.go:195] Run: which crictl
	I0229 18:36:32.888069   44016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:36:32.934922   44016 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:36:32.934985   44016 ssh_runner.go:195] Run: containerd --version
	I0229 18:36:32.965899   44016 ssh_runner.go:195] Run: containerd --version
	I0229 18:36:32.998028   44016 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on containerd 1.7.11 ...
	I0229 18:36:32.999328   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetIP
	I0229 18:36:33.002316   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:33.002735   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:33.002768   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:33.002975   44016 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:36:33.009054   44016 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 18:36:33.009108   44016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:36:33.063541   44016 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 18:36:33.063570   44016 containerd.go:519] Images already preloaded, skipping extraction
	I0229 18:36:33.063633   44016 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:36:33.120389   44016 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 18:36:33.120415   44016 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:36:33.120476   44016 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:36:33.171054   44016 cni.go:84] Creating CNI manager for ""
	I0229 18:36:33.171085   44016 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:36:33.171110   44016 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:36:33.171135   44016 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.115 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-907979 NodeName:kubernetes-upgrade-907979 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minik
ube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:36:33.171302   44016 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-907979"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:36:33.171387   44016 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-907979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:36:33.171457   44016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:36:33.185937   44016 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:36:33.186019   44016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:36:33.199631   44016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (404 bytes)
	I0229 18:36:33.241241   44016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:36:33.272819   44016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0229 18:36:33.293064   44016 ssh_runner.go:195] Run: grep 192.168.50.115	control-plane.minikube.internal$ /etc/hosts
	I0229 18:36:33.297998   44016 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979 for IP: 192.168.50.115
	I0229 18:36:33.298032   44016 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:36:33.298199   44016 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:36:33.298279   44016 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:36:33.298385   44016 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key
	I0229 18:36:33.298444   44016 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key.17cdc941
	I0229 18:36:33.298501   44016 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.key
	I0229 18:36:33.298661   44016 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:36:33.298692   44016 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:36:33.298703   44016 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:36:33.298731   44016 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:36:33.298764   44016 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:36:33.298801   44016 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:36:33.298865   44016 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:36:33.299423   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:36:33.328772   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:36:33.358073   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:36:33.391628   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:36:33.421900   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:36:33.454822   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:36:33.487621   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:36:33.522316   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:36:33.551006   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:36:33.579336   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:36:33.615486   44016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:36:33.644612   44016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:36:33.667902   44016 ssh_runner.go:195] Run: openssl version
	I0229 18:36:33.675445   44016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:36:33.688858   44016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:36:33.694216   44016 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:36:33.694276   44016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:36:33.701533   44016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:36:33.717355   44016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:36:33.736100   44016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:36:33.741700   44016 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:36:33.741756   44016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:36:33.749016   44016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:36:33.759849   44016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:36:33.773531   44016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:36:33.781212   44016 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:36:33.781275   44016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:36:33.788011   44016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:36:33.798863   44016 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:36:33.803796   44016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:36:33.810173   44016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:36:33.816817   44016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:36:33.825137   44016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:36:33.836703   44016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:36:33.844840   44016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:36:33.854734   44016 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-907979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-907979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:36:33.854835   44016 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:36:33.854889   44016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:36:33.912093   44016 cri.go:89] found id: "7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb"
	I0229 18:36:33.912114   44016 cri.go:89] found id: "dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837"
	I0229 18:36:33.912120   44016 cri.go:89] found id: "083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c"
	I0229 18:36:33.912124   44016 cri.go:89] found id: "15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c"
	I0229 18:36:33.912128   44016 cri.go:89] found id: ""
	I0229 18:36:33.912179   44016 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0229 18:36:33.938323   44016 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c","pid":1012,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c/rootfs","created":"2024-02-29T18:36:20.276945667Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.29.0-rc.2","io.kubernetes.cri.sandbox-id":"f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-907979","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7ff527c560060b938e6f8376fc4ddabc"},"owner":"root"},{"ociVersion":"1.0.2-dev","i
d":"15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c","pid":1022,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c/rootfs","created":"2024-02-29T18:36:20.289270734Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2","io.kubernetes.cri.sandbox-id":"2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-907979","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a57e018ff431e6e012196ef34825ee14"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2786043252b1f2c738b57200a41c69049c88876c245a82c52840210e
da6788bc","pid":899,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc/rootfs","created":"2024-02-29T18:36:20.026636957Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-907979_a57e018ff431e6e012196ef34825ee14","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-907979","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a57e018ff431e6e012196e
f34825ee14"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa","pid":937,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa/rootfs","created":"2024-02-29T18:36:20.098320936Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-907979_c29c292bc3bfa54b70acde7052369abc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-907979","io.kube
rnetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c29c292bc3bfa54b70acde7052369abc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb","pid":1089,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb/rootfs","created":"2024-02-29T18:36:20.503129474Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.29.0-rc.2","io.kubernetes.cri.sandbox-id":"724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-907979","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"c29c292bc3
bfa54b70acde7052369abc"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a/rootfs","created":"2024-02-29T18:36:20.087958289Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-907979_e7688532cad5f44712a5a7efb453ee36","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-907979","io.kubernetes.c
ri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e7688532cad5f44712a5a7efb453ee36"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837","pid":1071,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837/rootfs","created":"2024-02-29T18:36:20.45564415Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.10-0","io.kubernetes.cri.sandbox-id":"ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-907979","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e7688532cad5f44712a5a7efb453ee36"},"owner":"root"},{"
ociVersion":"1.0.2-dev","id":"f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b","pid":930,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b/rootfs","created":"2024-02-29T18:36:20.08398346Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-907979_7ff527c560060b938e6f8376fc4ddabc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-907979","io.kubernetes.cri.sandbox-namespace":"k
ube-system","io.kubernetes.cri.sandbox-uid":"7ff527c560060b938e6f8376fc4ddabc"},"owner":"root"}]
	I0229 18:36:33.938524   44016 cri.go:126] list returned 8 containers
	I0229 18:36:33.938540   44016 cri.go:129] container: {ID:083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c Status:running}
	I0229 18:36:33.938573   44016 cri.go:135] skipping {083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c running}: state = "running", want "paused"
	I0229 18:36:33.938583   44016 cri.go:129] container: {ID:15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c Status:running}
	I0229 18:36:33.938591   44016 cri.go:135] skipping {15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c running}: state = "running", want "paused"
	I0229 18:36:33.938601   44016 cri.go:129] container: {ID:2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc Status:running}
	I0229 18:36:33.938608   44016 cri.go:131] skipping 2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc - not in ps
	I0229 18:36:33.938616   44016 cri.go:129] container: {ID:724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa Status:running}
	I0229 18:36:33.938622   44016 cri.go:131] skipping 724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa - not in ps
	I0229 18:36:33.938627   44016 cri.go:129] container: {ID:7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb Status:running}
	I0229 18:36:33.938639   44016 cri.go:135] skipping {7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb running}: state = "running", want "paused"
	I0229 18:36:33.938648   44016 cri.go:129] container: {ID:ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a Status:running}
	I0229 18:36:33.938656   44016 cri.go:131] skipping ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a - not in ps
	I0229 18:36:33.938665   44016 cri.go:129] container: {ID:dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837 Status:running}
	I0229 18:36:33.938676   44016 cri.go:135] skipping {dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837 running}: state = "running", want "paused"
	I0229 18:36:33.938684   44016 cri.go:129] container: {ID:f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b Status:running}
	I0229 18:36:33.938691   44016 cri.go:131] skipping f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b - not in ps
	I0229 18:36:33.938744   44016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:36:33.950579   44016 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:36:33.950602   44016 kubeadm.go:636] restartCluster start
	I0229 18:36:33.950657   44016 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:36:33.961113   44016 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:36:33.962383   44016 kubeconfig.go:92] found "kubernetes-upgrade-907979" server: "https://192.168.50.115:8443"
	I0229 18:36:33.964588   44016 kapi.go:59] client config for kubernetes-upgrade-907979: &rest.Config{Host:"https://192.168.50.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:36:33.965270   44016 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:36:33.976579   44016 api_server.go:166] Checking apiserver status ...
	I0229 18:36:33.976630   44016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:36:33.990718   44016 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1012/cgroup
	W0229 18:36:34.001132   44016 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1012/cgroup: Process exited with status 1
	stdout:
	
	stderr:
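One common reason the freezer-cgroup lookup above fails is a cgroup v2 guest, where /proc/<pid>/cgroup has no per-controller "N:freezer:" entries; either way the warning is non-fatal here, since the healthz probe a few lines later returns 200. A generic way to check which cgroup mode the node runs (not something this log performs) is:

    stat -fc %T /sys/fs/cgroup    # prints cgroup2fs on cgroup v2, tmpfs on cgroup v1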
	I0229 18:36:34.001174   44016 ssh_runner.go:195] Run: ls
	I0229 18:36:34.006176   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:34.010298   44016 api_server.go:279] https://192.168.50.115:8443/healthz returned 200:
	ok
	I0229 18:36:34.022153   44016 system_pods.go:86] 5 kube-system pods found
	I0229 18:36:34.022184   44016 system_pods.go:89] "etcd-kubernetes-upgrade-907979" [5751347f-8fc4-4a06-a86b-03bfe628b494] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:36:34.022190   44016 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-907979" [97632c94-e301-42aa-958d-3369f1e4234d] Pending
	I0229 18:36:34.022203   44016 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-907979" [cf4b0f1f-7ddb-4d8b-babb-5df6d6528a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:36:34.022214   44016 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-907979" [31b91f42-e0b0-4cf2-a0a3-4163141ffb30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:36:34.022231   44016 system_pods.go:89] "storage-provisioner" [faa4b55b-a0a0-43c1-993b-eb889ee36f8c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0229 18:36:34.022245   44016 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-apiserver, kube-proxy
	I0229 18:36:34.022256   44016 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:36:34.022263   44016 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0229 18:36:34.022300   44016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:36:34.066100   44016 cri.go:89] found id: "7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb"
	I0229 18:36:34.066122   44016 cri.go:89] found id: "dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837"
	I0229 18:36:34.066126   44016 cri.go:89] found id: "083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c"
	I0229 18:36:34.066129   44016 cri.go:89] found id: "15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c"
	I0229 18:36:34.066131   44016 cri.go:89] found id: ""
	I0229 18:36:34.066136   44016 cri.go:234] Stopping containers: [7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837 083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c 15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c]
	I0229 18:36:34.066201   44016 ssh_runner.go:195] Run: which crictl
	I0229 18:36:34.070818   44016 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837 083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c 15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c
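The container sweep above is plain crictl and can be reproduced by hand on the node with the same flags the log shows (the container IDs are whatever the first command prints, not necessarily the hashes above):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo crictl stop --timeout=10 <container-id> [<container-id> ...]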
	I0229 18:36:40.424351   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:36:40.424578   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:36:40.424599   41533 kubeadm.go:322] 
	I0229 18:36:40.424651   41533 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:36:40.424699   41533 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:36:40.424708   41533 kubeadm.go:322] 
	I0229 18:36:40.424756   41533 kubeadm.go:322] This error is likely caused by:
	I0229 18:36:40.424790   41533 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:36:40.424950   41533 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:36:40.424967   41533 kubeadm.go:322] 
	I0229 18:36:40.425115   41533 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:36:40.425166   41533 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:36:40.425205   41533 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:36:40.425216   41533 kubeadm.go:322] 
	I0229 18:36:40.425357   41533 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:36:40.425467   41533 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:36:40.425583   41533 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:36:40.425654   41533 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:36:40.425765   41533 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:36:40.425813   41533 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:36:40.426648   41533 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:36:40.426768   41533 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:36:40.426891   41533 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:36:40.426946   41533 kubeadm.go:406] StartCluster complete in 3m55.2800036s
	I0229 18:36:40.426993   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:36:40.427055   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:36:40.477588   41533 cri.go:89] found id: ""
	I0229 18:36:40.477611   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.477624   41533 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:36:40.477629   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:36:40.477681   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:36:40.515615   41533 cri.go:89] found id: ""
	I0229 18:36:40.515638   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.515653   41533 logs.go:278] No container was found matching "etcd"
	I0229 18:36:40.515661   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:36:40.515739   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:36:40.561137   41533 cri.go:89] found id: ""
	I0229 18:36:40.561171   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.561182   41533 logs.go:278] No container was found matching "coredns"
	I0229 18:36:40.561193   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:36:40.561249   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:36:40.608424   41533 cri.go:89] found id: ""
	I0229 18:36:40.608452   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.608461   41533 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:36:40.608467   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:36:40.608517   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:36:40.664528   41533 cri.go:89] found id: ""
	I0229 18:36:40.664557   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.664568   41533 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:36:40.664576   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:36:40.664630   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:36:40.707656   41533 cri.go:89] found id: ""
	I0229 18:36:40.707684   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.707696   41533 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:36:40.707706   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:36:40.707777   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:36:40.745670   41533 cri.go:89] found id: ""
	I0229 18:36:40.745700   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.745711   41533 logs.go:278] No container was found matching "kindnet"
	I0229 18:36:40.745724   41533 logs.go:123] Gathering logs for containerd ...
	I0229 18:36:40.745737   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:36:40.779255   41533 logs.go:123] Gathering logs for container status ...
	I0229 18:36:40.779284   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:36:40.825429   41533 logs.go:123] Gathering logs for kubelet ...
	I0229 18:36:40.825457   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:36:40.873415   41533 logs.go:123] Gathering logs for dmesg ...
	I0229 18:36:40.873452   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:36:40.890876   41533 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:36:40.890902   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:36:41.022194   41533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
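At this point no kube-* containers exist and localhost:8443 refuses connections, so the kubelet journal gathered above is the main evidence; it can also be pulled straight from the guest (the profile name is a placeholder for the one used in this run):

    minikube ssh -p <profile> "sudo journalctl -u kubelet -n 400"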
	W0229 18:36:41.022233   41533 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:36:41.022276   41533 out.go:239] * 
	W0229 18:36:41.022335   41533 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:36:41.022361   41533 out.go:239] * 
	W0229 18:36:41.023506   41533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:36:41.026891   41533 out.go:177] 
	W0229 18:36:41.028113   41533 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:36:41.028153   41533 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:36:41.028170   41533 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:36:41.029630   41533 out.go:177] 
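A frequent cause of this exact pattern (the kubelet never answering on 127.0.0.1:10248) is a cgroup-driver mismatch between the kubelet and the container runtime, which is what the suggestion above targets. A retry along those lines would look roughly like this (the profile name is a placeholder, and the flag only helps if containerd is in fact configured for systemd cgroups):

    minikube start -p <profile> --driver=kvm2 --container-runtime=containerd \
        --extra-config=kubelet.cgroup-driver=systemd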
	I0229 18:36:44.479578   44016 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837 083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c 15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c: (10.408718733s)
	I0229 18:36:44.479650   44016 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:36:44.515803   44016 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:36:44.527138   44016 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 29 18:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Feb 29 18:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Feb 29 18:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Feb 29 18:36 /etc/kubernetes/scheduler.conf
	
	I0229 18:36:44.527193   44016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0229 18:36:44.536879   44016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0229 18:36:44.546541   44016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0229 18:36:44.555775   44016 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:36:44.555814   44016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0229 18:36:44.565479   44016 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0229 18:36:44.575583   44016 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:36:44.575640   44016 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0229 18:36:44.585930   44016 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:36:44.595821   44016 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:36:44.595842   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:36:44.661686   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:36:46.077685   44016 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.41596321s)
	I0229 18:36:46.077717   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:36:46.300698   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:36:46.392678   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:36:46.496332   44016 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:36:46.496443   44016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:36:46.997256   44016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:36:47.021412   44016 api_server.go:72] duration metric: took 525.07789ms to wait for apiserver process to appear ...
	I0229 18:36:47.021440   44016 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:36:47.021461   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:47.021928   44016 api_server.go:269] stopped: https://192.168.50.115:8443/healthz: Get "https://192.168.50.115:8443/healthz": dial tcp 192.168.50.115:8443: connect: connection refused
	I0229 18:36:47.521497   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:49.158990   44016 api_server.go:279] https://192.168.50.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:36:49.159025   44016 api_server.go:103] status: https://192.168.50.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:36:49.159040   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:49.184911   44016 api_server.go:279] https://192.168.50.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:36:49.184947   44016 api_server.go:103] status: https://192.168.50.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:36:49.522441   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:49.529130   44016 api_server.go:279] https://192.168.50.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:36:49.529167   44016 api_server.go:103] status: https://192.168.50.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
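The 403 and 500 responses above are the expected progression right after the control plane is recreated: anonymous access to /healthz is only allowed once the RBAC bootstrap roles exist, and the endpoint keeps returning 500 until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish. The same verbose breakdown can be fetched by hand with the profile's client certificates (paths as shown in the client config earlier in this log):

    curl --cacert /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt \
        --cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.crt \
        --key /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key \
        "https://192.168.50.115:8443/healthz?verbose"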
	I0229 18:36:50.022406   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:50.026720   44016 api_server.go:279] https://192.168.50.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:36:50.026743   44016 api_server.go:103] status: https://192.168.50.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:36:50.522389   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:50.527566   44016 api_server.go:279] https://192.168.50.115:8443/healthz returned 200:
	ok
	I0229 18:36:50.535308   44016 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:36:50.535330   44016 api_server.go:131] duration metric: took 3.513883658s to wait for apiserver health ...
	I0229 18:36:50.535339   44016 cni.go:84] Creating CNI manager for ""
	I0229 18:36:50.535345   44016 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:36:50.537069   44016 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:36:50.538290   44016 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:36:50.552780   44016 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
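The 457-byte file copied above is minikube's bridge CNI definition for the containerd runtime. Purely as an illustration of the conflist format (the exact contents minikube writes, including CNI version and pod subnet, are not shown in this log), a bridge configuration generally looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }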
	I0229 18:36:50.578254   44016 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:36:50.587443   44016 system_pods.go:59] 5 kube-system pods found
	I0229 18:36:50.587474   44016 system_pods.go:61] "etcd-kubernetes-upgrade-907979" [5751347f-8fc4-4a06-a86b-03bfe628b494] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:36:50.587483   44016 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-907979" [97632c94-e301-42aa-958d-3369f1e4234d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:36:50.587493   44016 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-907979" [cf4b0f1f-7ddb-4d8b-babb-5df6d6528a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:36:50.587503   44016 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-907979" [31b91f42-e0b0-4cf2-a0a3-4163141ffb30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:36:50.587516   44016 system_pods.go:61] "storage-provisioner" [faa4b55b-a0a0-43c1-993b-eb889ee36f8c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0229 18:36:50.587533   44016 system_pods.go:74] duration metric: took 9.249699ms to wait for pod list to return data ...
	I0229 18:36:50.587542   44016 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:36:50.590935   44016 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:36:50.590958   44016 node_conditions.go:123] node cpu capacity is 2
	I0229 18:36:50.590969   44016 node_conditions.go:105] duration metric: took 3.418353ms to run NodePressure ...
	I0229 18:36:50.590989   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:36:50.857868   44016 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:36:50.871298   44016 ops.go:34] apiserver oom_adj: -16
	I0229 18:36:50.871318   44016 kubeadm.go:640] restartCluster took 16.920709549s
	I0229 18:36:50.871329   44016 kubeadm.go:406] StartCluster complete in 17.016602473s
	I0229 18:36:50.871350   44016 settings.go:142] acquiring lock: {Name:mk54a855ef147e30c2cf7f1217afa4524cb1d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:36:50.871420   44016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:36:50.873145   44016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/kubeconfig: {Name:mk5f8fb7db84beb25fa22fdc3301133bb69ddfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:36:50.873440   44016 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:36:50.873505   44016 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:36:50.873573   44016 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-907979"
	I0229 18:36:50.873594   44016 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-907979"
	W0229 18:36:50.873606   44016 addons.go:243] addon storage-provisioner should already be in state true
	I0229 18:36:50.873589   44016 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-907979"
	I0229 18:36:50.873681   44016 config.go:182] Loaded profile config "kubernetes-upgrade-907979": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0229 18:36:50.873681   44016 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-907979"
	I0229 18:36:50.873659   44016 host.go:66] Checking if "kubernetes-upgrade-907979" exists ...
	I0229 18:36:50.874219   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:50.874249   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:50.874325   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:50.874362   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:50.874930   44016 kapi.go:59] client config for kubernetes-upgrade-907979: &rest.Config{Host:"https://192.168.50.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:36:50.877809   44016 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-907979" context rescaled to 1 replicas
	I0229 18:36:50.877835   44016 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.115 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:36:50.879523   44016 out.go:177] * Verifying Kubernetes components...
	I0229 18:36:50.880717   44016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:36:50.889144   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I0229 18:36:50.889585   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:50.890031   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:50.890053   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:50.890409   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:50.890845   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:50.890870   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:50.893395   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I0229 18:36:50.893859   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:50.894308   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:50.894334   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:50.894725   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:50.894895   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetState
	I0229 18:36:50.898106   44016 kapi.go:59] client config for kubernetes-upgrade-907979: &rest.Config{Host:"https://192.168.50.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kubernetes-upgrade-907979/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:36:50.898422   44016 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-907979"
	W0229 18:36:50.898444   44016 addons.go:243] addon default-storageclass should already be in state true
	I0229 18:36:50.898471   44016 host.go:66] Checking if "kubernetes-upgrade-907979" exists ...
	I0229 18:36:50.898887   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:50.898930   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:50.905875   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0229 18:36:50.906267   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:50.906738   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:50.906757   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:50.907104   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:50.907318   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetState
	I0229 18:36:50.908872   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:50.910788   44016 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:36:50.912369   44016 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:36:50.912387   44016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:36:50.912404   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:50.913548   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0229 18:36:50.913924   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:50.914412   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:50.914435   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:50.914770   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:50.915264   44016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:36:50.915284   44016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:36:50.915466   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:50.915998   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:50.916037   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:50.916195   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:50.916344   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:50.916484   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:50.916597   44016 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:36:50.929044   44016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I0229 18:36:50.929366   44016 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:36:50.929765   44016 main.go:141] libmachine: Using API Version  1
	I0229 18:36:50.929786   44016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:36:50.930046   44016 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:36:50.930194   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetState
	I0229 18:36:50.931421   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .DriverName
	I0229 18:36:50.931675   44016 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:36:50.931693   44016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:36:50.931705   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHHostname
	I0229 18:36:50.934106   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:50.934486   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:d4:e4", ip: ""} in network mk-kubernetes-upgrade-907979: {Iface:virbr1 ExpiryTime:2024-02-29 19:36:01 +0000 UTC Type:0 Mac:52:54:00:24:d4:e4 Iaid: IPaddr:192.168.50.115 Prefix:24 Hostname:kubernetes-upgrade-907979 Clientid:01:52:54:00:24:d4:e4}
	I0229 18:36:50.934520   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | domain kubernetes-upgrade-907979 has defined IP address 192.168.50.115 and MAC address 52:54:00:24:d4:e4 in network mk-kubernetes-upgrade-907979
	I0229 18:36:50.934644   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHPort
	I0229 18:36:50.934805   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHKeyPath
	I0229 18:36:50.934962   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .GetSSHUsername
	I0229 18:36:50.935131   44016 sshutil.go:53] new ssh client: &{IP:192.168.50.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kubernetes-upgrade-907979/id_rsa Username:docker}
	I0229 18:36:50.984991   44016 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:36:50.985062   44016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:36:50.985153   44016 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:36:51.000264   44016 api_server.go:72] duration metric: took 122.406998ms to wait for apiserver process to appear ...
	I0229 18:36:51.000288   44016 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:36:51.000301   44016 api_server.go:253] Checking apiserver healthz at https://192.168.50.115:8443/healthz ...
	I0229 18:36:51.004409   44016 api_server.go:279] https://192.168.50.115:8443/healthz returned 200:
	ok
	I0229 18:36:51.005597   44016 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:36:51.005613   44016 api_server.go:131] duration metric: took 5.319076ms to wait for apiserver health ...
	I0229 18:36:51.005619   44016 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:36:51.010598   44016 system_pods.go:59] 5 kube-system pods found
	I0229 18:36:51.010623   44016 system_pods.go:61] "etcd-kubernetes-upgrade-907979" [5751347f-8fc4-4a06-a86b-03bfe628b494] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:36:51.010635   44016 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-907979" [97632c94-e301-42aa-958d-3369f1e4234d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:36:51.010655   44016 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-907979" [cf4b0f1f-7ddb-4d8b-babb-5df6d6528a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:36:51.010667   44016 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-907979" [31b91f42-e0b0-4cf2-a0a3-4163141ffb30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:36:51.010674   44016 system_pods.go:61] "storage-provisioner" [faa4b55b-a0a0-43c1-993b-eb889ee36f8c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0229 18:36:51.010685   44016 system_pods.go:74] duration metric: took 5.059771ms to wait for pod list to return data ...
	I0229 18:36:51.010700   44016 kubeadm.go:581] duration metric: took 132.8455ms to wait for : map[apiserver:true system_pods:true] ...
	I0229 18:36:51.010716   44016 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:36:51.013635   44016 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:36:51.013653   44016 node_conditions.go:123] node cpu capacity is 2
	I0229 18:36:51.013661   44016 node_conditions.go:105] duration metric: took 2.94115ms to run NodePressure ...
	I0229 18:36:51.013672   44016 start.go:228] waiting for startup goroutines ...
	I0229 18:36:51.028963   44016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:36:51.046835   44016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:36:51.746423   44016 main.go:141] libmachine: Making call to close driver server
	I0229 18:36:51.746453   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .Close
	I0229 18:36:51.746649   44016 main.go:141] libmachine: Making call to close driver server
	I0229 18:36:51.746671   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .Close
	I0229 18:36:51.746778   44016 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:36:51.746846   44016 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:36:51.746855   44016 main.go:141] libmachine: Making call to close driver server
	I0229 18:36:51.746893   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .Close
	I0229 18:36:51.746921   44016 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:36:51.746938   44016 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:36:51.746823   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Closing plugin on server side
	I0229 18:36:51.746947   44016 main.go:141] libmachine: Making call to close driver server
	I0229 18:36:51.746985   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .Close
	I0229 18:36:51.747186   44016 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:36:51.747210   44016 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:36:51.747292   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) DBG | Closing plugin on server side
	I0229 18:36:51.747336   44016 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:36:51.747367   44016 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:36:51.753715   44016 main.go:141] libmachine: Making call to close driver server
	I0229 18:36:51.753754   44016 main.go:141] libmachine: (kubernetes-upgrade-907979) Calling .Close
	I0229 18:36:51.753980   44016 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:36:51.753998   44016 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:36:51.755926   44016 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:36:51.757205   44016 addons.go:505] enable addons completed in 883.70923ms: enabled=[storage-provisioner default-storageclass]
	I0229 18:36:51.757252   44016 start.go:233] waiting for cluster config update ...
	I0229 18:36:51.757266   44016 start.go:242] writing updated cluster config ...
	I0229 18:36:51.757484   44016 ssh_runner.go:195] Run: rm -f paused
	I0229 18:36:51.803763   44016 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 18:36:51.805489   44016 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-907979" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	87258bc922956       d4e01cdf63970       5 seconds ago       Running             kube-controller-manager   1                   2786043252b1f       kube-controller-manager-kubernetes-upgrade-907979
	38e40a504c452       bbb47a0f83324       5 seconds ago       Running             kube-apiserver            1                   f7a63ba0bc950       kube-apiserver-kubernetes-upgrade-907979
	3e4d20065b4df       4270645ed6b7a       18 seconds ago      Running             kube-scheduler            1                   724f2794ce0eb       kube-scheduler-kubernetes-upgrade-907979
	3e13c5ad78d6d       a0eed15eed449       18 seconds ago      Running             etcd                      1                   ba58877e11e83       etcd-kubernetes-upgrade-907979
	7a384c9004312       4270645ed6b7a       32 seconds ago      Exited              kube-scheduler            0                   724f2794ce0eb       kube-scheduler-kubernetes-upgrade-907979
	dbc756291634a       a0eed15eed449       32 seconds ago      Exited              etcd                      0                   ba58877e11e83       etcd-kubernetes-upgrade-907979
	083844b357296       bbb47a0f83324       32 seconds ago      Exited              kube-apiserver            0                   f7a63ba0bc950       kube-apiserver-kubernetes-upgrade-907979
	15ce367d57e68       d4e01cdf63970       32 seconds ago      Exited              kube-controller-manager   0                   2786043252b1f       kube-controller-manager-kubernetes-upgrade-907979
	
	
	==> containerd <==
	Feb 29 18:36:34 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:34.621412983Z" level=info msg="CreateContainer within sandbox \"ba58877e11e83af5cc21577fafa50ea1d4117272d432cc5eac844d4acece605a\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"3e13c5ad78d6dcfb29dbf0406922c63210c7b8b662e669addff864509cc0e431\""
	Feb 29 18:36:34 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:34.622627370Z" level=info msg="StartContainer for \"3e13c5ad78d6dcfb29dbf0406922c63210c7b8b662e669addff864509cc0e431\""
	Feb 29 18:36:34 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:34.632369131Z" level=info msg="CreateContainer within sandbox \"724f2794ce0ebe1c00ac0b8b4a90f2ff18c0cfbe0c4332fa259c2e4f28d620aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3e4d20065b4df7be5ed33e47abd0ab0a0ac15a0aaae68a2788d4e12c12a96a4b\""
	Feb 29 18:36:34 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:34.633219766Z" level=info msg="StartContainer for \"3e4d20065b4df7be5ed33e47abd0ab0a0ac15a0aaae68a2788d4e12c12a96a4b\""
	Feb 29 18:36:34 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:34.721620158Z" level=info msg="StartContainer for \"3e13c5ad78d6dcfb29dbf0406922c63210c7b8b662e669addff864509cc0e431\" returns successfully"
	Feb 29 18:36:34 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:34.743010404Z" level=info msg="StartContainer for \"3e4d20065b4df7be5ed33e47abd0ab0a0ac15a0aaae68a2788d4e12c12a96a4b\" returns successfully"
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.285381400Z" level=info msg="Kill container \"083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c\""
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.347823244Z" level=info msg="shim disconnected" id=083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c namespace=k8s.io
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.348007524Z" level=warning msg="cleaning up after shim disconnected" id=083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c namespace=k8s.io
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.348023931Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.375474303Z" level=info msg="StopContainer for \"083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c\" returns successfully"
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.377185369Z" level=info msg="StopContainer for \"15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c\" with timeout 10 (s)"
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.377978077Z" level=info msg="Stop container \"15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c\" with signal terminated"
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.437630163Z" level=info msg="shim disconnected" id=15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c namespace=k8s.io
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.437919330Z" level=warning msg="cleaning up after shim disconnected" id=15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c namespace=k8s.io
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.438052065Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 29 18:36:44 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:44.457371006Z" level=info msg="StopContainer for \"15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c\" returns successfully"
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.831680898Z" level=info msg="CreateContainer within sandbox \"2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.834080662Z" level=info msg="CreateContainer within sandbox \"f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.857418927Z" level=info msg="CreateContainer within sandbox \"f7a63ba0bc9503da00c988f6b2c68aedaa45cd24412b8616058106953667756b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"38e40a504c452d441cdb59785ac370bca377677e68d2c17a16b5a4c21b2bba9e\""
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.858176600Z" level=info msg="StartContainer for \"38e40a504c452d441cdb59785ac370bca377677e68d2c17a16b5a4c21b2bba9e\""
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.865481485Z" level=info msg="CreateContainer within sandbox \"2786043252b1f2c738b57200a41c69049c88876c245a82c52840210eda6788bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"87258bc9229569a33d0d90746dd2e1c47ca81a7d086e67815a9e10e70025516a\""
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.866118951Z" level=info msg="StartContainer for \"87258bc9229569a33d0d90746dd2e1c47ca81a7d086e67815a9e10e70025516a\""
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.984660645Z" level=info msg="StartContainer for \"87258bc9229569a33d0d90746dd2e1c47ca81a7d086e67815a9e10e70025516a\" returns successfully"
	Feb 29 18:36:46 kubernetes-upgrade-907979 containerd[1328]: time="2024-02-29T18:36:46.994111603Z" level=info msg="StartContainer for \"38e40a504c452d441cdb59785ac370bca377677e68d2c17a16b5a4c21b2bba9e\" returns successfully"
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-907979
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-907979
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:36:23 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-907979
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:36:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:36:49 +0000   Thu, 29 Feb 2024 18:36:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:36:49 +0000   Thu, 29 Feb 2024 18:36:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:36:49 +0000   Thu, 29 Feb 2024 18:36:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:36:49 +0000   Thu, 29 Feb 2024 18:36:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.115
	  Hostname:    kubernetes-upgrade-907979
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb8c6d6a27ca4a7aac82e266e75677fc
	  System UUID:                cb8c6d6a-27ca-4a7a-ac82-e266e75677fc
	  Boot ID:                    163a18bd-47ab-493e-8261-12dd92b00768
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-907979                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26s
	  kube-system                 kube-apiserver-kubernetes-upgrade-907979             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-907979    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-kubernetes-upgrade-907979             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 33s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet  Node kubernetes-upgrade-907979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet  Node kubernetes-upgrade-907979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x7 over 33s)  kubelet  Node kubernetes-upgrade-907979 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  33s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053111] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043060] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.552543] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 18:36] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.696817] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.676468] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
	[  +0.062575] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070703] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +0.168587] systemd-fstab-generator[503]: Ignoring "noauto" option for root device
	[  +0.143442] systemd-fstab-generator[515]: Ignoring "noauto" option for root device
	[  +0.275971] systemd-fstab-generator[544]: Ignoring "noauto" option for root device
	[  +5.377591] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.059246] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.797727] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[ +11.991337] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	[  +0.068683] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.070279] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	[  +0.197628] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.141331] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.402624] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +12.370949] kauditd_printk_skb: 139 callbacks suppressed
	[  +1.835821] systemd-fstab-generator[1786]: Ignoring "noauto" option for root device
	
	
	==> etcd [3e13c5ad78d6dcfb29dbf0406922c63210c7b8b662e669addff864509cc0e431] <==
	{"level":"info","ts":"2024-02-29T18:36:34.795351Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"60ed51a5023475a3","initial-advertise-peer-urls":["https://192.168.50.115:2380"],"listen-peer-urls":["https://192.168.50.115:2380"],"advertise-client-urls":["https://192.168.50.115:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.115:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:36:34.7954Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:36:34.795438Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"60ed51a5023475a3","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-02-29T18:36:34.796536Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.115:2380"}
	{"level":"info","ts":"2024-02-29T18:36:34.796576Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.115:2380"}
	{"level":"info","ts":"2024-02-29T18:36:34.79631Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:36:34.798187Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:36:34.798226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:36:35.076563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:36:35.076625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:36:35.076661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 received MsgPreVoteResp from 60ed51a5023475a3 at term 2"}
	{"level":"info","ts":"2024-02-29T18:36:35.076671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:36:35.076677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 received MsgVoteResp from 60ed51a5023475a3 at term 3"}
	{"level":"info","ts":"2024-02-29T18:36:35.076924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:36:35.076937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 60ed51a5023475a3 elected leader 60ed51a5023475a3 at term 3"}
	{"level":"info","ts":"2024-02-29T18:36:35.082362Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"60ed51a5023475a3","local-member-attributes":"{Name:kubernetes-upgrade-907979 ClientURLs:[https://192.168.50.115:2379]}","request-path":"/0/members/60ed51a5023475a3/attributes","cluster-id":"df337350b136b0b9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:36:35.082777Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:36:35.084883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:36:35.085316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T18:36:35.087125Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.115:2379"}
	{"level":"info","ts":"2024-02-29T18:36:35.087399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:36:35.087437Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:36:35.458922Z","caller":"traceutil/trace.go:171","msg":"trace[1160490844] linearizableReadLoop","detail":"{readStateIndex:297; appliedIndex:297; }","duration":"159.556446ms","start":"2024-02-29T18:36:35.297259Z","end":"2024-02-29T18:36:35.456816Z","steps":["trace[1160490844] 'read index received'  (duration: 159.553523ms)","trace[1160490844] 'applied index is now lower than readState.Index'  (duration: 2.106µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T18:36:35.459275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.987854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-scheduler-kubernetes-upgrade-907979.17b8693f8cbc4ce5\" ","response":"range_response_count:1 size:882"}
	{"level":"info","ts":"2024-02-29T18:36:35.460939Z","caller":"traceutil/trace.go:171","msg":"trace[421236367] range","detail":"{range_begin:/registry/events/kube-system/kube-scheduler-kubernetes-upgrade-907979.17b8693f8cbc4ce5; range_end:; response_count:1; response_revision:289; }","duration":"163.675338ms","start":"2024-02-29T18:36:35.297254Z","end":"2024-02-29T18:36:35.46093Z","steps":["trace[421236367] 'agreement among raft nodes before linearized reading'  (duration: 161.906417ms)"],"step_count":1}
	
	
	==> etcd [dbc756291634a1c1cf614048ca0bd095b3dd57294a2cb14809d8f54630e1a837] <==
	{"level":"info","ts":"2024-02-29T18:36:21.909234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:36:21.909264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 received MsgVoteResp from 60ed51a5023475a3 at term 2"}
	{"level":"info","ts":"2024-02-29T18:36:21.909382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"60ed51a5023475a3 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T18:36:21.90943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 60ed51a5023475a3 elected leader 60ed51a5023475a3 at term 2"}
	{"level":"info","ts":"2024-02-29T18:36:21.912589Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"60ed51a5023475a3","local-member-attributes":"{Name:kubernetes-upgrade-907979 ClientURLs:[https://192.168.50.115:2379]}","request-path":"/0/members/60ed51a5023475a3/attributes","cluster-id":"df337350b136b0b9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:36:21.912763Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:36:21.9149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:36:21.926144Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:36:21.931932Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:36:21.931981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:36:21.937576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T18:36:21.940539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.115:2379"}
	{"level":"info","ts":"2024-02-29T18:36:21.940653Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"df337350b136b0b9","local-member-id":"60ed51a5023475a3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:36:21.944001Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:36:21.945089Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:36:34.182809Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T18:36:34.183029Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-907979","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.115:2380"],"advertise-client-urls":["https://192.168.50.115:2379"]}
	{"level":"warn","ts":"2024-02-29T18:36:34.18311Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:36:34.183221Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:36:34.208156Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:36:34.208268Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.115:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T18:36:34.208634Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"60ed51a5023475a3","current-leader-member-id":"60ed51a5023475a3"}
	{"level":"info","ts":"2024-02-29T18:36:34.212461Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.115:2380"}
	{"level":"info","ts":"2024-02-29T18:36:34.212968Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.115:2380"}
	{"level":"info","ts":"2024-02-29T18:36:34.213099Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-907979","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.115:2380"],"advertise-client-urls":["https://192.168.50.115:2379"]}
	
	
	==> kernel <==
	 18:36:52 up 1 min,  0 users,  load average: 1.24, 0.31, 0.10
	Linux kubernetes-upgrade-907979 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c] <==
	I0229 18:36:35.572101       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 18:36:35.571943       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0229 18:36:35.572168       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0229 18:36:35.572179       1 establishing_controller.go:87] Shutting down EstablishingController
	I0229 18:36:35.572190       1 naming_controller.go:302] Shutting down NamingConditionController
	I0229 18:36:35.572418       1 controller.go:115] Shutting down OpenAPI V3 controller
	I0229 18:36:35.572464       1 controller.go:161] Shutting down OpenAPI controller
	I0229 18:36:35.572555       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0229 18:36:35.572964       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0229 18:36:35.573003       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0229 18:36:35.573017       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0229 18:36:35.574385       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0229 18:36:35.574429       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:36:35.571906       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0229 18:36:35.571912       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0229 18:36:35.571919       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0229 18:36:35.571924       1 controller.go:129] Ending legacy_token_tracking_controller
	I0229 18:36:35.581721       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0229 18:36:35.571932       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0229 18:36:35.588985       1 controller.go:159] Shutting down quota evaluator
	I0229 18:36:35.589026       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:36:35.589555       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:36:35.589592       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:36:35.589599       1 controller.go:178] quota evaluator worker shutdown
	I0229 18:36:35.589604       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-apiserver [38e40a504c452d441cdb59785ac370bca377677e68d2c17a16b5a4c21b2bba9e] <==
	I0229 18:36:49.084145       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 18:36:49.079429       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:36:49.079587       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:36:49.179670       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:36:49.183346       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 18:36:49.183502       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 18:36:49.185042       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:36:49.185464       1 aggregator.go:165] initial CRD sync complete...
	I0229 18:36:49.185497       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:36:49.185503       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:36:49.185514       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:36:49.186179       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:36:49.197298       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:36:49.275605       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:36:49.276184       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:36:49.278216       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 18:36:50.083154       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 18:36:50.290997       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.115]
	I0229 18:36:50.292532       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:36:50.298720       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 18:36:50.678897       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 18:36:50.688401       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 18:36:50.719984       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 18:36:50.746138       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 18:36:50.756241       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c] <==
	I0229 18:36:26.372797       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0229 18:36:26.372924       1 disruption.go:433] "Sending events to api server."
	I0229 18:36:26.373307       1 disruption.go:444] "Starting disruption controller"
	I0229 18:36:26.373388       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0229 18:36:26.523031       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0229 18:36:26.523202       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0229 18:36:26.523228       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0229 18:36:26.674799       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0229 18:36:26.675116       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0229 18:36:26.675343       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0229 18:36:26.675377       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0229 18:36:26.822537       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0229 18:36:26.822925       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0229 18:36:26.822959       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0229 18:36:26.972296       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0229 18:36:26.972776       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0229 18:36:26.972792       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0229 18:36:27.021333       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0229 18:36:27.021427       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0229 18:36:27.021649       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	W0229 18:36:37.073826       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.115:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:37.574753       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.115:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:38.576027       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.115:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:40.576583       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.115:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:40.577208       1 cidr_allocator.go:144] "Failed to list all nodes" err="Get \"https://192.168.50.115:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition"
	
	
	==> kube-controller-manager [87258bc9229569a33d0d90746dd2e1c47ca81a7d086e67815a9e10e70025516a] <==
	I0229 18:36:51.182056       1 shared_informer.go:311] Waiting for caches to sync for job
	I0229 18:36:51.188368       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0229 18:36:51.188615       1 disruption.go:433] "Sending events to api server."
	I0229 18:36:51.188915       1 disruption.go:444] "Starting disruption controller"
	I0229 18:36:51.189019       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0229 18:36:51.191388       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0229 18:36:51.191773       1 controller.go:169] "Starting ephemeral volume controller"
	I0229 18:36:51.191922       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0229 18:36:51.203737       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0229 18:36:51.204099       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0229 18:36:51.204347       1 expand_controller.go:328] "Starting expand controller"
	I0229 18:36:51.204591       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0229 18:36:51.210759       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0229 18:36:51.211255       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0229 18:36:51.211268       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0229 18:36:51.246066       1 shared_informer.go:318] Caches are synced for tokens
	I0229 18:36:51.252081       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0229 18:36:51.252343       1 namespace_controller.go:197] "Starting namespace controller"
	I0229 18:36:51.252725       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0229 18:36:51.261587       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0229 18:36:51.262204       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0229 18:36:51.262309       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0229 18:36:51.265789       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0229 18:36:51.266190       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0229 18:36:51.266296       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	
	
	==> kube-scheduler [3e4d20065b4df7be5ed33e47abd0ab0a0ac15a0aaae68a2788d4e12c12a96a4b] <==
	W0229 18:36:44.491674       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.50.115:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:44.491743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.50.115:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:44.506414       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.50.115:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:44.506474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.115:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:44.945193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.115:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:44.945264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.115:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.056616       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.50.115:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.056758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.50.115:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.150294       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.50.115:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.150336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.50.115:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.706101       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.115:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.706136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.115:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.722831       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://192.168.50.115:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.722963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.115:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.818308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.50.115:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.818343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.115:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.832020       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.50.115:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.832053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.115:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.961324       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.50.115:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.961390       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.50.115:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:45.995113       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.50.115:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:45.995186       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.50.115:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	W0229 18:36:46.419020       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.50.115:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	E0229 18:36:46.419104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.115:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.115:8443: connect: connection refused
	I0229 18:36:53.060563       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7a384c9004312ccc1aa73c4a9bb35f326decfb125dae1e6454389d1a7930c0bb] <==
	W0229 18:36:24.248763       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 18:36:24.248823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 18:36:24.250160       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 18:36:24.250207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 18:36:24.267239       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 18:36:24.267294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 18:36:24.335178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 18:36:24.335244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 18:36:24.359771       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 18:36:24.359826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 18:36:24.462946       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 18:36:24.463000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 18:36:24.517284       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 18:36:24.517357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 18:36:24.570764       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 18:36:24.571544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 18:36:24.652050       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 18:36:24.652107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 18:36:24.907294       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 18:36:24.907348       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 18:36:26.672381       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:36:34.094749       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 18:36:34.094790       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 18:36:34.094961       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0229 18:36:34.095936       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.525945    1793 topology_manager.go:215] "Topology Admit Handler" podUID="c29c292bc3bfa54b70acde7052369abc" podNamespace="kube-system" podName="kube-scheduler-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: E0229 18:36:46.622979    1793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-907979?timeout=10s\": dial tcp 192.168.50.115:8443: connect: connection refused" interval="400ms"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.718187    1793 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.718749    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a57e018ff431e6e012196ef34825ee14-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-907979\" (UID: \"a57e018ff431e6e012196ef34825ee14\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: E0229 18:36:46.719278    1793 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.115:8443: connect: connection refused" node="kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719337    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a57e018ff431e6e012196ef34825ee14-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-907979\" (UID: \"a57e018ff431e6e012196ef34825ee14\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719408    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c29c292bc3bfa54b70acde7052369abc-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-907979\" (UID: \"c29c292bc3bfa54b70acde7052369abc\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719430    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ff527c560060b938e6f8376fc4ddabc-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-907979\" (UID: \"7ff527c560060b938e6f8376fc4ddabc\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719450    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a57e018ff431e6e012196ef34825ee14-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-907979\" (UID: \"a57e018ff431e6e012196ef34825ee14\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719471    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a57e018ff431e6e012196ef34825ee14-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-907979\" (UID: \"a57e018ff431e6e012196ef34825ee14\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719490    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a57e018ff431e6e012196ef34825ee14-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-907979\" (UID: \"a57e018ff431e6e012196ef34825ee14\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719510    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ff527c560060b938e6f8376fc4ddabc-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-907979\" (UID: \"7ff527c560060b938e6f8376fc4ddabc\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719562    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e7688532cad5f44712a5a7efb453ee36-etcd-certs\") pod \"etcd-kubernetes-upgrade-907979\" (UID: \"e7688532cad5f44712a5a7efb453ee36\") " pod="kube-system/etcd-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719585    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e7688532cad5f44712a5a7efb453ee36-etcd-data\") pod \"etcd-kubernetes-upgrade-907979\" (UID: \"e7688532cad5f44712a5a7efb453ee36\") " pod="kube-system/etcd-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.719606    1793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ff527c560060b938e6f8376fc4ddabc-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-907979\" (UID: \"7ff527c560060b938e6f8376fc4ddabc\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-907979"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.827385    1793 scope.go:117] "RemoveContainer" containerID="15ce367d57e68e74d93de1ce0a7a39154b46b868203ebc2844a3798911e7ad6c"
	Feb 29 18:36:46 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:46.828051    1793 scope.go:117] "RemoveContainer" containerID="083844b35729676002212107a1e87d699d57495b5feaaaa5927b988182c4479c"
	Feb 29 18:36:47 kubernetes-upgrade-907979 kubelet[1793]: E0229 18:36:47.024179    1793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-907979?timeout=10s\": dial tcp 192.168.50.115:8443: connect: connection refused" interval="800ms"
	Feb 29 18:36:47 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:47.120608    1793 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-907979"
	Feb 29 18:36:49 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:49.237762    1793 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-907979"
	Feb 29 18:36:49 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:49.237925    1793 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-907979"
	Feb 29 18:36:49 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:49.406375    1793 apiserver.go:52] "Watching apiserver"
	Feb 29 18:36:49 kubernetes-upgrade-907979 kubelet[1793]: I0229 18:36:49.419668    1793 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 29 18:36:49 kubernetes-upgrade-907979 kubelet[1793]: E0229 18:36:49.489936    1793 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-907979\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-907979"
	Feb 29 18:36:49 kubernetes-upgrade-907979 kubelet[1793]: E0229 18:36:49.491071    1793 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-907979\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-907979"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-907979 -n kubernetes-upgrade-907979
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-907979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-907979 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-907979 describe pod storage-provisioner: exit status 1 (68.509481ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-907979 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-907979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-907979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-907979: (1.139024504s)
--- FAIL: TestKubernetesUpgrade (361.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (303s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-561577 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-561577 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 109 (5m2.707523021s)

                                                
                                                
-- stdout --
	* [old-k8s-version-561577] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node old-k8s-version-561577 in cluster old-k8s-version-561577
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:31:38.377856   41533 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:31:38.377991   41533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:31:38.378001   41533 out.go:304] Setting ErrFile to fd 2...
	I0229 18:31:38.378005   41533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:31:38.378200   41533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:31:38.378786   41533 out.go:298] Setting JSON to false
	I0229 18:31:38.379639   41533 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4440,"bootTime":1709227059,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:31:38.379695   41533 start.go:139] virtualization: kvm guest
	I0229 18:31:38.382111   41533 out.go:177] * [old-k8s-version-561577] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:31:38.383605   41533 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:31:38.383612   41533 notify.go:220] Checking for updates...
	I0229 18:31:38.385081   41533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:31:38.386685   41533 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:31:38.387892   41533 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:31:38.389142   41533 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:31:38.390678   41533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:31:38.392200   41533 config.go:182] Loaded profile config "cert-expiration-829233": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:31:38.392292   41533 config.go:182] Loaded profile config "kubernetes-upgrade-907979": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:31:38.392398   41533 config.go:182] Loaded profile config "stopped-upgrade-475131": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0229 18:31:38.392481   41533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:31:38.426164   41533 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:31:38.427378   41533 start.go:299] selected driver: kvm2
	I0229 18:31:38.427389   41533 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:31:38.427399   41533 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:31:38.428056   41533 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:31:38.428151   41533 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:31:38.441930   41533 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:31:38.441965   41533 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:31:38.442150   41533 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:31:38.442210   41533 cni.go:84] Creating CNI manager for ""
	I0229 18:31:38.442223   41533 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:31:38.442234   41533 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:31:38.442242   41533 start_flags.go:323] config:
	{Name:old-k8s-version-561577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:31:38.442356   41533 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:31:38.443859   41533 out.go:177] * Starting control plane node old-k8s-version-561577 in cluster old-k8s-version-561577
	I0229 18:31:38.444974   41533 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 18:31:38.444996   41533 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 18:31:38.445003   41533 cache.go:56] Caching tarball of preloaded images
	I0229 18:31:38.445058   41533 preload.go:174] Found /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:31:38.445067   41533 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 18:31:38.445141   41533 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/config.json ...
	I0229 18:31:38.445157   41533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/config.json: {Name:mk9e5dc95c177594a1139463be6d9cca893320eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:31:38.445257   41533 start.go:365] acquiring machines lock for old-k8s-version-561577: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:32:05.447345   41533 start.go:369] acquired machines lock for "old-k8s-version-561577" in 27.002028916s
	I0229 18:32:05.447437   41533 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-561577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:32:05.447603   41533 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:32:05.450004   41533 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:32:05.450251   41533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:32:05.450304   41533 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:32:05.466827   41533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0229 18:32:05.467281   41533 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:32:05.467788   41533 main.go:141] libmachine: Using API Version  1
	I0229 18:32:05.467813   41533 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:32:05.468501   41533 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:32:05.468827   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetMachineName
	I0229 18:32:05.470182   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:05.470370   41533 start.go:159] libmachine.API.Create for "old-k8s-version-561577" (driver="kvm2")
	I0229 18:32:05.470400   41533 client.go:168] LocalClient.Create starting
	I0229 18:32:05.470463   41533 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem
	I0229 18:32:05.470504   41533 main.go:141] libmachine: Decoding PEM data...
	I0229 18:32:05.470525   41533 main.go:141] libmachine: Parsing certificate...
	I0229 18:32:05.470619   41533 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem
	I0229 18:32:05.470650   41533 main.go:141] libmachine: Decoding PEM data...
	I0229 18:32:05.470677   41533 main.go:141] libmachine: Parsing certificate...
	I0229 18:32:05.470720   41533 main.go:141] libmachine: Running pre-create checks...
	I0229 18:32:05.470741   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .PreCreateCheck
	I0229 18:32:05.471143   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetConfigRaw
	I0229 18:32:05.471721   41533 main.go:141] libmachine: Creating machine...
	I0229 18:32:05.471741   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .Create
	I0229 18:32:05.471887   41533 main.go:141] libmachine: (old-k8s-version-561577) Creating KVM machine...
	I0229 18:32:05.473139   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found existing default KVM network
	I0229 18:32:05.474921   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:05.474776   41785 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204bf0}
	I0229 18:32:05.479969   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | trying to create private KVM network mk-old-k8s-version-561577 192.168.39.0/24...
	I0229 18:32:05.547130   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | private KVM network mk-old-k8s-version-561577 192.168.39.0/24 created
	I0229 18:32:05.547159   41533 main.go:141] libmachine: (old-k8s-version-561577) Setting up store path in /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577 ...
	I0229 18:32:05.547173   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:05.547087   41785 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:32:05.547192   41533 main.go:141] libmachine: (old-k8s-version-561577) Building disk image from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:32:05.547243   41533 main.go:141] libmachine: (old-k8s-version-561577) Downloading /home/jenkins/minikube-integration/18259-6412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:32:05.774730   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:05.774592   41785 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa...
	I0229 18:32:05.921056   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:05.920943   41785 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/old-k8s-version-561577.rawdisk...
	I0229 18:32:05.921085   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Writing magic tar header
	I0229 18:32:05.921098   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Writing SSH key tar header
	I0229 18:32:05.921115   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:05.921051   41785 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577 ...
	I0229 18:32:05.921129   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577
	I0229 18:32:05.921204   41533 main.go:141] libmachine: (old-k8s-version-561577) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577 (perms=drwx------)
	I0229 18:32:05.921225   41533 main.go:141] libmachine: (old-k8s-version-561577) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:32:05.921233   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines
	I0229 18:32:05.921242   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:32:05.921252   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412
	I0229 18:32:05.921267   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:32:05.921280   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:32:05.921290   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Checking permissions on dir: /home
	I0229 18:32:05.921299   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Skipping /home - not owner
	I0229 18:32:05.921343   41533 main.go:141] libmachine: (old-k8s-version-561577) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube (perms=drwxr-xr-x)
	I0229 18:32:05.921370   41533 main.go:141] libmachine: (old-k8s-version-561577) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412 (perms=drwxrwxr-x)
	I0229 18:32:05.921389   41533 main.go:141] libmachine: (old-k8s-version-561577) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:32:05.921403   41533 main.go:141] libmachine: (old-k8s-version-561577) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:32:05.921420   41533 main.go:141] libmachine: (old-k8s-version-561577) Creating domain...
	I0229 18:32:05.922324   41533 main.go:141] libmachine: (old-k8s-version-561577) define libvirt domain using xml: 
	I0229 18:32:05.922344   41533 main.go:141] libmachine: (old-k8s-version-561577) <domain type='kvm'>
	I0229 18:32:05.922351   41533 main.go:141] libmachine: (old-k8s-version-561577)   <name>old-k8s-version-561577</name>
	I0229 18:32:05.922357   41533 main.go:141] libmachine: (old-k8s-version-561577)   <memory unit='MiB'>2200</memory>
	I0229 18:32:05.922362   41533 main.go:141] libmachine: (old-k8s-version-561577)   <vcpu>2</vcpu>
	I0229 18:32:05.922371   41533 main.go:141] libmachine: (old-k8s-version-561577)   <features>
	I0229 18:32:05.922382   41533 main.go:141] libmachine: (old-k8s-version-561577)     <acpi/>
	I0229 18:32:05.922386   41533 main.go:141] libmachine: (old-k8s-version-561577)     <apic/>
	I0229 18:32:05.922392   41533 main.go:141] libmachine: (old-k8s-version-561577)     <pae/>
	I0229 18:32:05.922396   41533 main.go:141] libmachine: (old-k8s-version-561577)     
	I0229 18:32:05.922404   41533 main.go:141] libmachine: (old-k8s-version-561577)   </features>
	I0229 18:32:05.922408   41533 main.go:141] libmachine: (old-k8s-version-561577)   <cpu mode='host-passthrough'>
	I0229 18:32:05.922413   41533 main.go:141] libmachine: (old-k8s-version-561577)   
	I0229 18:32:05.922420   41533 main.go:141] libmachine: (old-k8s-version-561577)   </cpu>
	I0229 18:32:05.922424   41533 main.go:141] libmachine: (old-k8s-version-561577)   <os>
	I0229 18:32:05.922429   41533 main.go:141] libmachine: (old-k8s-version-561577)     <type>hvm</type>
	I0229 18:32:05.922434   41533 main.go:141] libmachine: (old-k8s-version-561577)     <boot dev='cdrom'/>
	I0229 18:32:05.922439   41533 main.go:141] libmachine: (old-k8s-version-561577)     <boot dev='hd'/>
	I0229 18:32:05.922444   41533 main.go:141] libmachine: (old-k8s-version-561577)     <bootmenu enable='no'/>
	I0229 18:32:05.922455   41533 main.go:141] libmachine: (old-k8s-version-561577)   </os>
	I0229 18:32:05.922463   41533 main.go:141] libmachine: (old-k8s-version-561577)   <devices>
	I0229 18:32:05.922469   41533 main.go:141] libmachine: (old-k8s-version-561577)     <disk type='file' device='cdrom'>
	I0229 18:32:05.922482   41533 main.go:141] libmachine: (old-k8s-version-561577)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/boot2docker.iso'/>
	I0229 18:32:05.922493   41533 main.go:141] libmachine: (old-k8s-version-561577)       <target dev='hdc' bus='scsi'/>
	I0229 18:32:05.922498   41533 main.go:141] libmachine: (old-k8s-version-561577)       <readonly/>
	I0229 18:32:05.922503   41533 main.go:141] libmachine: (old-k8s-version-561577)     </disk>
	I0229 18:32:05.922507   41533 main.go:141] libmachine: (old-k8s-version-561577)     <disk type='file' device='disk'>
	I0229 18:32:05.922517   41533 main.go:141] libmachine: (old-k8s-version-561577)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:32:05.922524   41533 main.go:141] libmachine: (old-k8s-version-561577)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/old-k8s-version-561577.rawdisk'/>
	I0229 18:32:05.922532   41533 main.go:141] libmachine: (old-k8s-version-561577)       <target dev='hda' bus='virtio'/>
	I0229 18:32:05.922537   41533 main.go:141] libmachine: (old-k8s-version-561577)     </disk>
	I0229 18:32:05.922559   41533 main.go:141] libmachine: (old-k8s-version-561577)     <interface type='network'>
	I0229 18:32:05.922571   41533 main.go:141] libmachine: (old-k8s-version-561577)       <source network='mk-old-k8s-version-561577'/>
	I0229 18:32:05.922576   41533 main.go:141] libmachine: (old-k8s-version-561577)       <model type='virtio'/>
	I0229 18:32:05.922584   41533 main.go:141] libmachine: (old-k8s-version-561577)     </interface>
	I0229 18:32:05.922615   41533 main.go:141] libmachine: (old-k8s-version-561577)     <interface type='network'>
	I0229 18:32:05.922636   41533 main.go:141] libmachine: (old-k8s-version-561577)       <source network='default'/>
	I0229 18:32:05.922646   41533 main.go:141] libmachine: (old-k8s-version-561577)       <model type='virtio'/>
	I0229 18:32:05.922654   41533 main.go:141] libmachine: (old-k8s-version-561577)     </interface>
	I0229 18:32:05.922668   41533 main.go:141] libmachine: (old-k8s-version-561577)     <serial type='pty'>
	I0229 18:32:05.922680   41533 main.go:141] libmachine: (old-k8s-version-561577)       <target port='0'/>
	I0229 18:32:05.922692   41533 main.go:141] libmachine: (old-k8s-version-561577)     </serial>
	I0229 18:32:05.922703   41533 main.go:141] libmachine: (old-k8s-version-561577)     <console type='pty'>
	I0229 18:32:05.922715   41533 main.go:141] libmachine: (old-k8s-version-561577)       <target type='serial' port='0'/>
	I0229 18:32:05.922735   41533 main.go:141] libmachine: (old-k8s-version-561577)     </console>
	I0229 18:32:05.922746   41533 main.go:141] libmachine: (old-k8s-version-561577)     <rng model='virtio'>
	I0229 18:32:05.922756   41533 main.go:141] libmachine: (old-k8s-version-561577)       <backend model='random'>/dev/random</backend>
	I0229 18:32:05.922769   41533 main.go:141] libmachine: (old-k8s-version-561577)     </rng>
	I0229 18:32:05.922779   41533 main.go:141] libmachine: (old-k8s-version-561577)     
	I0229 18:32:05.922787   41533 main.go:141] libmachine: (old-k8s-version-561577)     
	I0229 18:32:05.922797   41533 main.go:141] libmachine: (old-k8s-version-561577)   </devices>
	I0229 18:32:05.922816   41533 main.go:141] libmachine: (old-k8s-version-561577) </domain>
	I0229 18:32:05.922839   41533 main.go:141] libmachine: (old-k8s-version-561577) 
	I0229 18:32:05.929553   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:11:19:5b in network default
	I0229 18:32:05.930091   41533 main.go:141] libmachine: (old-k8s-version-561577) Ensuring networks are active...
	I0229 18:32:05.930111   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:05.930866   41533 main.go:141] libmachine: (old-k8s-version-561577) Ensuring network default is active
	I0229 18:32:05.931150   41533 main.go:141] libmachine: (old-k8s-version-561577) Ensuring network mk-old-k8s-version-561577 is active
	I0229 18:32:05.931715   41533 main.go:141] libmachine: (old-k8s-version-561577) Getting domain xml...
	I0229 18:32:05.932310   41533 main.go:141] libmachine: (old-k8s-version-561577) Creating domain...
	I0229 18:32:07.138510   41533 main.go:141] libmachine: (old-k8s-version-561577) Waiting to get IP...
	I0229 18:32:07.139247   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:07.139734   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:07.139805   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:07.139736   41785 retry.go:31] will retry after 254.349055ms: waiting for machine to come up
	I0229 18:32:07.396082   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:07.396555   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:07.396588   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:07.396498   41785 retry.go:31] will retry after 293.265069ms: waiting for machine to come up
	I0229 18:32:07.691066   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:07.691614   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:07.691642   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:07.691566   41785 retry.go:31] will retry after 401.848111ms: waiting for machine to come up
	I0229 18:32:08.095077   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:08.095581   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:08.095610   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:08.095520   41785 retry.go:31] will retry after 465.902488ms: waiting for machine to come up
	I0229 18:32:08.563119   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:08.563565   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:08.563596   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:08.563514   41785 retry.go:31] will retry after 746.061647ms: waiting for machine to come up
	I0229 18:32:09.311047   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:09.311540   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:09.311567   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:09.311497   41785 retry.go:31] will retry after 669.919332ms: waiting for machine to come up
	I0229 18:32:09.983369   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:09.983897   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:09.983926   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:09.983853   41785 retry.go:31] will retry after 740.330077ms: waiting for machine to come up
	I0229 18:32:10.726255   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:10.726832   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:10.726855   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:10.726789   41785 retry.go:31] will retry after 1.373899377s: waiting for machine to come up
	I0229 18:32:12.102093   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:12.102699   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:12.102723   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:12.102654   41785 retry.go:31] will retry after 1.380437528s: waiting for machine to come up
	I0229 18:32:13.484202   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:13.484846   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:13.484879   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:13.484776   41785 retry.go:31] will retry after 1.902822682s: waiting for machine to come up
	I0229 18:32:15.389601   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:15.390123   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:15.390154   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:15.390068   41785 retry.go:31] will retry after 2.186214319s: waiting for machine to come up
	I0229 18:32:17.578848   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:17.579357   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:17.579388   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:17.579294   41785 retry.go:31] will retry after 2.282008825s: waiting for machine to come up
	I0229 18:32:19.863704   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:19.864134   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:19.864154   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:19.864096   41785 retry.go:31] will retry after 3.221566982s: waiting for machine to come up
	I0229 18:32:23.086798   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:23.087258   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:23.087281   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:23.087202   41785 retry.go:31] will retry after 3.725691028s: waiting for machine to come up
	I0229 18:32:26.814353   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:26.814980   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:32:26.815012   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:32:26.814935   41785 retry.go:31] will retry after 5.756739559s: waiting for machine to come up
	I0229 18:32:32.574113   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.574610   41533 main.go:141] libmachine: (old-k8s-version-561577) Found IP for machine: 192.168.39.66
	I0229 18:32:32.574637   41533 main.go:141] libmachine: (old-k8s-version-561577) Reserving static IP address...
	I0229 18:32:32.574653   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has current primary IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.574958   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-561577", mac: "52:54:00:88:1b:5b", ip: "192.168.39.66"} in network mk-old-k8s-version-561577
	I0229 18:32:32.645241   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Getting to WaitForSSH function...
	I0229 18:32:32.645268   41533 main.go:141] libmachine: (old-k8s-version-561577) Reserved static IP address: 192.168.39.66
	I0229 18:32:32.645284   41533 main.go:141] libmachine: (old-k8s-version-561577) Waiting for SSH to be available...
	I0229 18:32:32.648312   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.648667   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:32.648698   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.648837   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Using SSH client type: external
	I0229 18:32:32.648878   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa (-rw-------)
	I0229 18:32:32.648914   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:32:32.648931   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | About to run SSH command:
	I0229 18:32:32.648949   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | exit 0
	I0229 18:32:32.780062   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | SSH cmd err, output: <nil>: 
	I0229 18:32:32.780317   41533 main.go:141] libmachine: (old-k8s-version-561577) KVM machine creation complete!
	I0229 18:32:32.780664   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetConfigRaw
	I0229 18:32:32.781251   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:32.781461   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:32.781683   41533 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:32:32.781702   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetState
	I0229 18:32:32.783276   41533 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:32:32.783292   41533 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:32:32.783302   41533 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:32:32.783312   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:32.785916   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.786535   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:32.786580   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.786756   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:32.786967   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:32.787185   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:32.787365   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:32.787559   41533 main.go:141] libmachine: Using SSH client type: native
	I0229 18:32:32.787801   41533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:32:32.787817   41533 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:32:32.903262   41533 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:32:32.903285   41533 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:32:32.903295   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:32.906449   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.906908   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:32.906941   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:32.907145   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:32.907373   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:32.907543   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:32.907670   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:32.907835   41533 main.go:141] libmachine: Using SSH client type: native
	I0229 18:32:32.908054   41533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:32:32.908074   41533 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:32:33.031673   41533 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:32:33.031754   41533 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:32:33.031768   41533 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:32:33.031783   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetMachineName
	I0229 18:32:33.032008   41533 buildroot.go:166] provisioning hostname "old-k8s-version-561577"
	I0229 18:32:33.032029   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetMachineName
	I0229 18:32:33.032195   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.034866   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.035261   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.035301   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.035381   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:33.035568   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.035734   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.035922   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:33.036072   41533 main.go:141] libmachine: Using SSH client type: native
	I0229 18:32:33.036234   41533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:32:33.036245   41533 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-561577 && echo "old-k8s-version-561577" | sudo tee /etc/hostname
	I0229 18:32:33.171241   41533 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-561577
	
	I0229 18:32:33.171273   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.174049   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.174372   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.174401   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.174520   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:33.174734   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.174917   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.175070   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:33.175259   41533 main.go:141] libmachine: Using SSH client type: native
	I0229 18:32:33.175407   41533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:32:33.175424   41533 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-561577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-561577/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-561577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:32:33.302023   41533 main.go:141] libmachine: SSH cmd err, output: <nil>: 
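The SSH script above sets the hostname and rewrites (or appends) the 127.0.1.1 entry in /etc/hosts so the new name resolves locally. A quick way to confirm the result on the guest (illustrative check, not part of the test run):

	# The hostname and the loopback alias should both reflect the profile name.
	hostname                                    # -> old-k8s-version-561577
	grep 'old-k8s-version-561577' /etc/hosts    # -> 127.0.1.1 old-k8s-version-561577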
	I0229 18:32:33.302051   41533 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:32:33.302091   41533 buildroot.go:174] setting up certificates
	I0229 18:32:33.302099   41533 provision.go:83] configureAuth start
	I0229 18:32:33.302108   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetMachineName
	I0229 18:32:33.302405   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:32:33.305380   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.305724   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.305752   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.305897   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.308247   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.308654   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.308686   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.308884   41533 provision.go:138] copyHostCerts
	I0229 18:32:33.308938   41533 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:32:33.308955   41533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:32:33.309006   41533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:32:33.309083   41533 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:32:33.309091   41533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:32:33.309110   41533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:32:33.309159   41533 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:32:33.309165   41533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:32:33.309182   41533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:32:33.309220   41533 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-561577 san=[192.168.39.66 192.168.39.66 localhost 127.0.0.1 minikube old-k8s-version-561577]
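configureAuth generates a server certificate signed by the local minikube CA, with the SANs listed above (the node IP, localhost, 127.0.0.1, minikube, and the profile name). minikube does this in-process in Go; the openssl invocation below is only a rough illustration of what such a certificate contains, and the file names are placeholders rather than the paths minikube writes:

	# Illustrative only: issue a server cert with the same SANs as the log above,
	# signed by a CA key pair named ca.pem / ca-key.pem in the current directory.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem \
	  -subj "/O=jenkins.old-k8s-version-561577" \
	  -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:192.168.39.66,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:old-k8s-version-561577")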
	I0229 18:32:33.409876   41533 provision.go:172] copyRemoteCerts
	I0229 18:32:33.409932   41533 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:32:33.409957   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.412655   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.412973   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.413025   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.413184   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:33.413392   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.413538   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:33.413694   41533 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:32:33.501793   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:32:33.531300   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:32:33.559515   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:32:33.588247   41533 provision.go:86] duration metric: configureAuth took 286.134911ms
	I0229 18:32:33.588275   41533 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:32:33.588592   41533 config.go:182] Loaded profile config "old-k8s-version-561577": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:32:33.588625   41533 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:32:33.588644   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetURL
	I0229 18:32:33.589871   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | Using libvirt version 6000000
	I0229 18:32:33.592135   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.592452   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.592482   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.592628   41533 main.go:141] libmachine: Docker is up and running!
	I0229 18:32:33.592645   41533 main.go:141] libmachine: Reticulating splines...
	I0229 18:32:33.592652   41533 client.go:171] LocalClient.Create took 28.122245232s
	I0229 18:32:33.592674   41533 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-561577" took 28.122307729s
	I0229 18:32:33.592683   41533 start.go:300] post-start starting for "old-k8s-version-561577" (driver="kvm2")
	I0229 18:32:33.592692   41533 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:32:33.592711   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:33.592934   41533 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:32:33.592957   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.594939   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.595276   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.595312   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.595407   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:33.595587   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.595778   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:33.595934   41533 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:32:33.686737   41533 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:32:33.691676   41533 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:32:33.691704   41533 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:32:33.691779   41533 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:32:33.691886   41533 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:32:33.692010   41533 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:32:33.703911   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:32:33.735972   41533 start.go:303] post-start completed in 143.275025ms
	I0229 18:32:33.736029   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetConfigRaw
	I0229 18:32:33.736597   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:32:33.739383   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.739691   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.739720   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.739970   41533 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/config.json ...
	I0229 18:32:33.740184   41533 start.go:128] duration metric: createHost completed in 28.29256691s
	I0229 18:32:33.740214   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.742687   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.743017   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.743044   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.743205   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:33.743337   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.743478   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.743665   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:33.743828   41533 main.go:141] libmachine: Using SSH client type: native
	I0229 18:32:33.743998   41533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:32:33.744011   41533 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:32:33.874077   41533 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709231553.849244501
	
	I0229 18:32:33.874104   41533 fix.go:206] guest clock: 1709231553.849244501
	I0229 18:32:33.874114   41533 fix.go:219] Guest: 2024-02-29 18:32:33.849244501 +0000 UTC Remote: 2024-02-29 18:32:33.740199433 +0000 UTC m=+55.408613058 (delta=109.045068ms)
	I0229 18:32:33.874139   41533 fix.go:190] guest clock delta is within tolerance: 109.045068ms
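The fix.go lines above compare the guest clock (date +%s.%N run over SSH) with the host clock at the moment the command returns, and accept the machine when the delta is within tolerance. A hand-rolled version of the same comparison, using the SSH key path reported elsewhere in this log (illustrative; minikube uses its own SSH client rather than the ssh binary):

	# Read the guest clock over SSH, then diff it against the host clock.
	key=/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa
	guest=$(ssh -i "$key" docker@192.168.39.66 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest clock delta: %.6fs\n", g - h }'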
	I0229 18:32:33.874146   41533 start.go:83] releasing machines lock for "old-k8s-version-561577", held for 28.426759225s
	I0229 18:32:33.874170   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:33.874517   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:32:33.877680   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.878084   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.878132   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.878308   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:33.878873   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:33.879032   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:32:33.879113   41533 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:32:33.879165   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.879390   41533 ssh_runner.go:195] Run: cat /version.json
	I0229 18:32:33.879435   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:32:33.882013   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.882238   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.882398   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.882444   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.882660   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:33.882698   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:33.882724   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:33.882843   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:32:33.882902   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.882986   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:32:33.883113   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:33.883201   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:32:33.883327   41533 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:32:33.883419   41533 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:32:33.978176   41533 ssh_runner.go:195] Run: systemctl --version
	I0229 18:32:34.004917   41533 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:32:34.012879   41533 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:32:34.012963   41533 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:32:34.032655   41533 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
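The find/mv step above renames any bridge or podman CNI configs with a .mk_disabled suffix so the loader ignores them; here it caught the default podman bridge config. Checking the result on the guest would look like this (illustrative):

	# After the rename, only the disabled copy remains and is skipped by CNI.
	ls /etc/cni/net.d/
	#   87-podman-bridge.conflist.mk_disabled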
	I0229 18:32:34.032682   41533 start.go:475] detecting cgroup driver to use...
	I0229 18:32:34.032757   41533 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:32:34.071493   41533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:32:34.091159   41533 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:32:34.091227   41533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:32:34.106834   41533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:32:34.128445   41533 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:32:34.257358   41533 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:32:34.421016   41533 docker.go:233] disabling docker service ...
	I0229 18:32:34.421081   41533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:32:34.436631   41533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:32:34.450427   41533 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:32:34.575742   41533 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:32:34.706347   41533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
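Because the requested runtime is containerd, the cri-docker and docker units are stopped, disabled and masked so they cannot claim the CRI socket. Condensed into plain systemctl calls, the sequence above amounts to the following (illustrative; the log runs each unit separately):

	# Make containerd the only container runtime on the guest.
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active docker || echo "docker is not active"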
	I0229 18:32:34.721956   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:32:34.742562   41533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:32:34.753768   41533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:32:34.764781   41533 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:32:34.764832   41533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:32:34.777808   41533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:32:34.790046   41533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:32:34.802054   41533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:32:34.815500   41533 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:32:34.828915   41533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
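The sed edits above rewrite /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.1, restrict_oom_score_adj is turned off, SystemdCgroup is set to false (matching the "cgroupfs" driver chosen for this cluster), the legacy runtime names are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. A quick grep to confirm the result (illustrative):

	# Show the containerd settings touched by the sed commands above.
	grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|io\.containerd\.runc\.v2|conf_dir' \
	  /etc/containerd/config.toml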
	I0229 18:32:34.841046   41533 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:32:34.851817   41533 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:32:34.851892   41533 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:32:34.867456   41533 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
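The sysctl read above fails because /proc/sys/net/bridge/ does not exist until the br_netfilter module is loaded; the code therefore loads the module and then enables IPv4 forwarding. The same prerequisites, expressed directly (illustrative):

	# Load the bridge netfilter module, re-check the sysctl, and enable forwarding.
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"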
	I0229 18:32:34.879693   41533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:32:35.010088   41533 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:32:35.045199   41533 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:32:35.045275   41533 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:32:35.050125   41533 retry.go:31] will retry after 1.183054284s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:32:36.233695   41533 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
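After restarting containerd, the socket does not appear instantly, so the first stat fails and start.go waits and retries until /run/containerd/containerd.sock exists (bounded by the 60s budget noted above). A simple polling loop with the same effect (illustrative, not the retry.go implementation):

	# Poll for containerd's socket for up to 60 seconds after a restart.
	for i in $(seq 1 60); do
	  stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
	  sleep 1
	done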
	I0229 18:32:36.239927   41533 start.go:543] Will wait 60s for crictl version
	I0229 18:32:36.239991   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:36.245131   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:32:36.295121   41533 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:32:36.295198   41533 ssh_runner.go:195] Run: containerd --version
	I0229 18:32:36.330133   41533 ssh_runner.go:195] Run: containerd --version
	I0229 18:32:36.360920   41533 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	I0229 18:32:36.362134   41533 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:32:36.364939   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:36.365355   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:32:21 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:32:36.365395   41533 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:32:36.365660   41533 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:32:36.370346   41533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:32:36.384654   41533 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 18:32:36.384712   41533 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:32:36.423197   41533 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:32:36.423269   41533 ssh_runner.go:195] Run: which lz4
	I0229 18:32:36.428040   41533 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:32:36.432758   41533 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:32:36.432783   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (440628646 bytes)
	I0229 18:32:38.364114   41533 containerd.go:548] Took 1.936087 seconds to copy over tarball
	I0229 18:32:38.364191   41533 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:32:41.130324   41533 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.766104272s)
	I0229 18:32:41.130355   41533 containerd.go:555] Took 2.766211 seconds to extract the tarball
	I0229 18:32:41.130366   41533 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:32:41.174594   41533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:32:41.300494   41533 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:32:41.330444   41533 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:32:41.371029   41533 retry.go:31] will retry after 228.31915ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:32:41Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:32:41.600528   41533 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:32:41.640570   41533 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:32:41.640596   41533 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:32:41.640640   41533 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:32:41.640687   41533 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:32:41.640706   41533 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:32:41.640712   41533 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:32:41.640687   41533 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:32:41.640788   41533 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:32:41.640895   41533 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:32:41.640790   41533 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:32:41.642135   41533 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:32:41.642382   41533 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:32:41.642397   41533 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:32:41.642442   41533 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:32:41.642445   41533 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:32:41.642382   41533 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:32:41.642462   41533 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:32:41.642512   41533 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:32:41.885884   41533 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.16.0" and sha "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e"
	I0229 18:32:41.885931   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:41.971528   41533 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.1" and sha "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0229 18:32:41.971610   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:41.979347   41533 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.3.15-0" and sha "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	I0229 18:32:41.979404   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:41.981291   41533 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.16.0" and sha "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a"
	I0229 18:32:41.981340   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:41.990072   41533 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.16.0" and sha "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"
	I0229 18:32:41.990133   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:42.013641   41533 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.16.0" and sha "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d"
	I0229 18:32:42.013703   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:42.026166   41533 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.2" and sha "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b"
	I0229 18:32:42.026234   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:42.130500   41533 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:32:42.130558   41533 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:32:42.130626   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:42.944517   41533 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:32:42.944557   41533 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:32:42.944597   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:43.021340   41533 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.039977157s)
	I0229 18:32:43.021453   41533 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:32:43.021487   41533 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:32:43.021533   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:43.022329   41533 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.042905944s)
	I0229 18:32:43.022383   41533 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:32:43.022415   41533 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:32:43.022452   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:43.077496   41533 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.08733833s)
	I0229 18:32:43.077574   41533 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:32:43.077603   41533 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:32:43.077651   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:43.077830   41533 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.064107783s)
	I0229 18:32:43.077890   41533 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:32:43.077918   41533 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:32:43.077965   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:43.078418   41533 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.052163563s)
	I0229 18:32:43.078455   41533 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:32:43.078477   41533 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:32:43.078498   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:32:43.078507   41533 ssh_runner.go:195] Run: which crictl
	I0229 18:32:43.078541   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:32:43.078500   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:32:43.078891   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:32:43.083032   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:32:43.102823   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:32:43.234254   41533 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:32:43.234304   41533 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:32:43.234323   41533 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:32:43.234363   41533 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:32:43.234426   41533 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:32:43.239801   41533 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:32:43.239826   41533 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:32:43.280116   41533 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:32:43.507973   41533 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 18:32:43.508042   41533 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:32:43.716186   41533 cache_images.go:92] LoadImages completed in 2.075563631s
	W0229 18:32:43.716272   41533 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
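Each "Checking existence of image" step above runs ctr against the k8s.io namespace and compares what containerd holds with the pinned digests; every v1.16.0 image is reported missing, the stale tags are removed with crictl rmi, and loading from the host cache then fails because no cached image files exist for this version, which produces the warning. The underlying existence check can be reproduced like this (illustrative):

	# List the images containerd knows about in the k8s.io namespace and look
	# for one of the expected v1.16.0 tags.
	sudo ctr -n=k8s.io images check | grep 'registry.k8s.io/kube-apiserver:v1.16.0' \
	  || echo "kube-apiserver:v1.16.0 not present in containerd"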
	I0229 18:32:43.716336   41533 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:32:43.768371   41533 cni.go:84] Creating CNI manager for ""
	I0229 18:32:43.768407   41533 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:32:43.768428   41533 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:32:43.768452   41533 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.66 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-561577 NodeName:old-k8s-version-561577 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:32:43.768610   41533 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-561577"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-561577
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.66:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:32:43.768695   41533 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-561577 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:32:43.768758   41533 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:32:43.783891   41533 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:32:43.783977   41533 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:32:43.794631   41533 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
	I0229 18:32:43.818133   41533 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:32:43.844430   41533 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2185 bytes)
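The rendered kubeadm config and the kubelet unit pieces above are copied to the guest as 10-kubeadm.conf, kubelet.service, and kubeadm.yaml.new. For context, a config like this is ultimately fed to the kubeadm binary staged under /var/lib/minikube/binaries; the invocation below is only an illustration of that step, with assumed flags and an assumed final file name, not a command taken from this run:

	# Hypothetical: bootstrap the control plane from the generated config.
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=all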
	I0229 18:32:43.870243   41533 ssh_runner.go:195] Run: grep 192.168.39.66	control-plane.minikube.internal$ /etc/hosts
	I0229 18:32:43.874866   41533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:32:43.890305   41533 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577 for IP: 192.168.39.66
	I0229 18:32:43.890336   41533 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:32:43.890463   41533 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:32:43.890527   41533 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:32:43.890608   41533 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/client.key
	I0229 18:32:43.890626   41533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/client.crt with IP's: []
	I0229 18:32:44.305849   41533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/client.crt ...
	I0229 18:32:44.305884   41533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/client.crt: {Name:mk22751a1d36ed02f959a4404c6b26c4ec74c17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:32:44.306086   41533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/client.key ...
	I0229 18:32:44.306112   41533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/client.key: {Name:mk74b3449a907a51aefdd5e2c94bbfbe7b48cc19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:32:44.306242   41533 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key.b02ae5fc
	I0229 18:32:44.306262   41533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.crt.b02ae5fc with IP's: [192.168.39.66 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:32:44.450174   41533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.crt.b02ae5fc ...
	I0229 18:32:44.450209   41533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.crt.b02ae5fc: {Name:mk1e6178b56044ef5cae53646e4821badd7e335e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:32:44.450411   41533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key.b02ae5fc ...
	I0229 18:32:44.450434   41533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key.b02ae5fc: {Name:mkec6fdb6bcbd3f3e919e5a8f601248934c594e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:32:44.450540   41533 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.crt.b02ae5fc -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.crt
	I0229 18:32:44.450685   41533 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key.b02ae5fc -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key
	I0229 18:32:44.450771   41533 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.key
	I0229 18:32:44.450794   41533 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.crt with IP's: []
	I0229 18:32:44.642244   41533 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.crt ...
	I0229 18:32:44.642286   41533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.crt: {Name:mk57bf6e0cb56441997b715a276151098a05be2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:32:44.642452   41533 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.key ...
	I0229 18:32:44.642465   41533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.key: {Name:mk56d36289d9743a39c8f1eb27707c4c982d23fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:32:44.642656   41533 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:32:44.642700   41533 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:32:44.642713   41533 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:32:44.642743   41533 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:32:44.642767   41533 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:32:44.642790   41533 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:32:44.642826   41533 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:32:44.643374   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:32:44.674099   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:32:44.705440   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:32:44.732793   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:32:44.769614   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:32:44.800626   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:32:44.834724   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:32:44.865920   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:32:44.894557   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:32:44.925721   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:32:44.955195   41533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:32:44.989196   41533 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
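	At this point the profile certificates generated on the host have been pushed into the guest: the apiserver and proxy-client key pairs land under /var/lib/minikube/certs, and the CA bundles are staged under /usr/share/ca-certificates for the trust-store wiring that follows. A minimal sketch of inspecting one of the copied certificates from inside the VM (profile name and paths come from the log above; the `minikube ssh` invocation is an assumption about how you would reach the guest):

	    # open a shell inside the guest for this profile (hypothetical access path)
	    minikube ssh -p old-k8s-version-561577
	    # show subject, issuer and validity of the copied apiserver certificate
	    sudo openssl x509 -noout -subject -issuer -dates -in /var/lib/minikube/certs/apiserver.crt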
	I0229 18:32:45.009160   41533 ssh_runner.go:195] Run: openssl version
	I0229 18:32:45.016274   41533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:32:45.030056   41533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:32:45.035268   41533 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:32:45.035327   41533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:32:45.042007   41533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:32:45.054785   41533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:32:45.067200   41533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:32:45.072254   41533 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:32:45.072309   41533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:32:45.078944   41533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:32:45.091894   41533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:32:45.109276   41533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:32:45.115742   41533 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:32:45.115796   41533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:32:45.124170   41533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
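	The test/ln -fs pairs above implement OpenSSL's hashed trust-store layout: each CA dropped into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (51391683.0 for 13721.pem, b5213941.0 for minikubeCA.pem), which is how openssl-based clients look up trusted CAs. A minimal sketch of recreating one of those links by hand, assuming the same paths as in the log:

	    # compute the subject hash openssl uses when searching the trust directory
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # the first certificate with this hash gets the .0 suffix
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"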
	I0229 18:32:45.141576   41533 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:32:45.146892   41533 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:32:45.146947   41533 kubeadm.go:404] StartCluster: {Name:old-k8s-version-561577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
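	The StartCluster line is the complete machine and cluster configuration for this profile; the relevant bits here are KubernetesVersion:v1.16.0, ContainerRuntime:containerd and a single control-plane node at 192.168.39.66:8443. The same structure is persisted on the host as JSON, which is easier to read than the one-line dump; the path below assumes the default $MINIKUBE_HOME layout rather than the Jenkins-specific one used in this run:

	    # pretty-print the persisted profile config (path is an assumption; adjust MINIKUBE_HOME as needed)
	    python3 -m json.tool ~/.minikube/profiles/old-k8s-version-561577/config.json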
	I0229 18:32:45.147037   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:32:45.147115   41533 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:32:45.203687   41533 cri.go:89] found id: ""
	I0229 18:32:45.203749   41533 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:32:45.216835   41533 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:32:45.229136   41533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:32:45.240432   41533 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:32:45.240480   41533 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
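	This is the actual bootstrap step: kubeadm init runs against the config minikube rendered to /var/tmp/minikube/kubeadm.yaml, with a long --ignore-preflight-errors list so that the pre-created minikube directories and manifests, port 10250, swap and the CPU-count check do not abort the run. To see exactly what kubeadm was given, the rendered config can be read back from the guest; the invocation below is an assumption about how you would reach it:

	    # dump the kubeadm config minikube generated for this profile (hypothetical access path)
	    minikube ssh -p old-k8s-version-561577 -- sudo cat /var/tmp/minikube/kubeadm.yaml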
	I0229 18:32:45.359937   41533 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:32:45.360066   41533 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:32:45.617411   41533 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:32:45.617603   41533 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:32:45.617752   41533 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:32:45.899192   41533 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:32:45.901554   41533 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:32:45.915323   41533 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:32:46.069942   41533 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:32:46.072453   41533 out.go:204]   - Generating certificates and keys ...
	I0229 18:32:46.072576   41533 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:32:46.072698   41533 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:32:46.240117   41533 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:32:46.356404   41533 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:32:46.605636   41533 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:32:46.716709   41533 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:32:46.907844   41533 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:32:46.908284   41533 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-561577 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	I0229 18:32:47.147521   41533 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:32:47.147898   41533 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-561577 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	I0229 18:32:47.273591   41533 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:32:47.338991   41533 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:32:47.636139   41533 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:32:47.637176   41533 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:32:47.804096   41533 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:32:48.063778   41533 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:32:48.238381   41533 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:32:48.442695   41533 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:32:48.444324   41533 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:32:48.446516   41533 out.go:204]   - Booting up control plane ...
	I0229 18:32:48.446675   41533 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:32:48.456838   41533 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:32:48.458497   41533 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:32:48.459368   41533 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:32:48.462011   41533 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:33:28.458440   41533 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:33:28.458575   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:28.458863   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:33:33.458756   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:33.459041   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:33:43.458478   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:33:43.458827   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:34:03.459045   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:34:03.459329   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:34:43.461221   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:34:43.461476   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:34:43.461512   41533 kubeadm.go:322] 
	I0229 18:34:43.461575   41533 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:34:43.461743   41533 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:34:43.461760   41533 kubeadm.go:322] 
	I0229 18:34:43.461802   41533 kubeadm.go:322] This error is likely caused by:
	I0229 18:34:43.461868   41533 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:34:43.462009   41533 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:34:43.462020   41533 kubeadm.go:322] 
	I0229 18:34:43.462152   41533 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:34:43.462196   41533 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:34:43.462253   41533 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:34:43.462261   41533 kubeadm.go:322] 
	I0229 18:34:43.462395   41533 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:34:43.462558   41533 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:34:43.462673   41533 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:34:43.462745   41533 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:34:43.462836   41533 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:34:43.462876   41533 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:34:43.464180   41533 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:34:43.464302   41533 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:34:43.464404   41533 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
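	The repeated [kubelet-check] lines are kubeadm polling the kubelet's local health endpoint (http://localhost:10248/healthz) while it waits for the static control-plane pods; "connection refused" there means the kubelet process never came up (or exited immediately), so nothing downstream can start. kubeadm's own suggestions are the right first checks; a minimal sketch, run inside the guest VM:

	    # is the kubelet healthz endpoint answering at all?
	    curl -sS http://localhost:10248/healthz; echo
	    # service state and the most recent kubelet log lines
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet -n 100 --no-pager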
	W0229 18:34:43.464543   41533 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-561577 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-561577 localhost] and IPs [192.168.39.66 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:34:43.464604   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
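	Before retrying, minikube resets the half-initialized node: kubeadm reset against the containerd CRI socket removes what the failed init left behind (the static-pod manifests, the local etcd data directory and the generated /etc/kubernetes/*.conf files), while the certificates under /var/lib/minikube/certs survive, which is why the second attempt below reports "Using existing ..." for every cert. A quick way to confirm the node is clean again, assuming kubeadm's default paths:

	    # after a reset these should be empty or absent
	    ls /etc/kubernetes/manifests
	    sudo ls /var/lib/etcd
	    ls /etc/kubernetes/*.conf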
	I0229 18:34:43.941001   41533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:34:43.956091   41533 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:34:43.966793   41533 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:34:43.966840   41533 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:34:44.027760   41533 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:34:44.027837   41533 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:34:44.169758   41533 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:34:44.169916   41533 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:34:44.170051   41533 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:34:44.378519   41533 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:34:44.379512   41533 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:34:44.387865   41533 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:34:44.512904   41533 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:34:44.514771   41533 out.go:204]   - Generating certificates and keys ...
	I0229 18:34:44.514895   41533 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:34:44.519321   41533 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:34:44.519522   41533 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:34:44.519695   41533 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:34:44.519877   41533 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:34:44.520001   41533 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:34:44.520185   41533 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:34:44.520350   41533 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:34:44.520564   41533 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:34:44.520766   41533 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:34:44.520863   41533 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:34:44.521001   41533 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:34:44.848909   41533 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:34:44.981812   41533 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:34:45.229575   41533 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:34:45.404582   41533 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:34:45.405539   41533 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:34:45.409097   41533 out.go:204]   - Booting up control plane ...
	I0229 18:34:45.409218   41533 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:34:45.413929   41533 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:34:45.414979   41533 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:34:45.415845   41533 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:34:45.418702   41533 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:35:25.421804   41533 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:35:25.422481   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:25.422733   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:30.423665   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:30.423915   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:40.424299   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:40.424522   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:36:00.424143   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:36:00.424391   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:36:40.424351   41533 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:36:40.424578   41533 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:36:40.424599   41533 kubeadm.go:322] 
	I0229 18:36:40.424651   41533 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:36:40.424699   41533 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:36:40.424708   41533 kubeadm.go:322] 
	I0229 18:36:40.424756   41533 kubeadm.go:322] This error is likely caused by:
	I0229 18:36:40.424790   41533 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:36:40.424950   41533 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:36:40.424967   41533 kubeadm.go:322] 
	I0229 18:36:40.425115   41533 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:36:40.425166   41533 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:36:40.425205   41533 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:36:40.425216   41533 kubeadm.go:322] 
	I0229 18:36:40.425357   41533 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:36:40.425467   41533 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:36:40.425583   41533 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:36:40.425654   41533 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:36:40.425765   41533 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:36:40.425813   41533 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:36:40.426648   41533 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:36:40.426768   41533 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:36:40.426891   41533 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:36:40.426946   41533 kubeadm.go:406] StartCluster complete in 3m55.2800036s
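	After the second attempt fails the same way, StartCluster gives up (3m55s in total) and minikube switches to collecting diagnostics; everything from here to the failure box is that collection pass. The same information can be gathered on demand from the host with the logs command (standard flags):

	    # collect the full cluster/VM logs for this profile into a file for a bug report
	    minikube logs -p old-k8s-version-561577 --file=logs.txt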
	I0229 18:36:40.426993   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:36:40.427055   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:36:40.477588   41533 cri.go:89] found id: ""
	I0229 18:36:40.477611   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.477624   41533 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:36:40.477629   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:36:40.477681   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:36:40.515615   41533 cri.go:89] found id: ""
	I0229 18:36:40.515638   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.515653   41533 logs.go:278] No container was found matching "etcd"
	I0229 18:36:40.515661   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:36:40.515739   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:36:40.561137   41533 cri.go:89] found id: ""
	I0229 18:36:40.561171   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.561182   41533 logs.go:278] No container was found matching "coredns"
	I0229 18:36:40.561193   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:36:40.561249   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:36:40.608424   41533 cri.go:89] found id: ""
	I0229 18:36:40.608452   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.608461   41533 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:36:40.608467   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:36:40.608517   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:36:40.664528   41533 cri.go:89] found id: ""
	I0229 18:36:40.664557   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.664568   41533 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:36:40.664576   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:36:40.664630   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:36:40.707656   41533 cri.go:89] found id: ""
	I0229 18:36:40.707684   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.707696   41533 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:36:40.707706   41533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:36:40.707777   41533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:36:40.745670   41533 cri.go:89] found id: ""
	I0229 18:36:40.745700   41533 logs.go:276] 0 containers: []
	W0229 18:36:40.745711   41533 logs.go:278] No container was found matching "kindnet"
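	The block above is minikube asking the CRI, via crictl, whether any control-plane containers ever existed: each query by component name returns an empty list, confirming the kubelet never created the static pods (as opposed to the pods starting and then crashing). The equivalent manual checks inside the guest would be roughly:

	    # any containers at all in the kube-system namespace?
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	    # pod sandboxes known to containerd (also expected to be empty here)
	    sudo crictl pods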
	I0229 18:36:40.745724   41533 logs.go:123] Gathering logs for containerd ...
	I0229 18:36:40.745737   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:36:40.779255   41533 logs.go:123] Gathering logs for container status ...
	I0229 18:36:40.779284   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:36:40.825429   41533 logs.go:123] Gathering logs for kubelet ...
	I0229 18:36:40.825457   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:36:40.873415   41533 logs.go:123] Gathering logs for dmesg ...
	I0229 18:36:40.873452   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:36:40.890876   41533 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:36:40.890902   41533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:36:41.022194   41533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
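	The describe-nodes attempt fails for the same underlying reason: kubectl is pointed at the apiserver on localhost:8443, and since no kube-apiserver container was ever started there is nothing listening on that port. Two quick confirmations from inside the guest, assuming the usual tools are present in the minikube ISO:

	    # nothing should be listening on the apiserver port
	    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
	    # and containerd has no apiserver container to show
	    sudo crictl ps -a --name kube-apiserver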
	W0229 18:36:41.022233   41533 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:36:41.022276   41533 out.go:239] * 
	W0229 18:36:41.022335   41533 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:36:41.022361   41533 out.go:239] * 
	* 
	W0229 18:36:41.023506   41533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:36:41.026891   41533 out.go:177] 
	W0229 18:36:41.028113   41533 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:36:41.028153   41533 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:36:41.028170   41533 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:36:41.029630   41533 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-561577 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 6 (252.889664ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:36:41.309587   44148 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-561577" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-561577" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (303.00s)
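Note: the FirstStart failure above is kubeadm's [kubelet-check] timing out against the kubelet health endpoint (http://localhost:10248/healthz keeps returning "connection refused"). As a rough standalone sketch, not minikube or kubeadm code, the same probe can be reproduced from inside the VM; the 40s window below simply mirrors the "Initial timeout of 40s" in the log.

	// healthzprobe.go: poll the kubelet healthz endpoint the way kubeadm's
	// [kubelet-check] does, to see whether the kubelet ever starts answering
	// on 127.0.0.1:10248. Illustrative sketch only.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second) // mirrors kubeadm's initial timeout
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				fmt.Println("kubelet not answering yet:", err)
				time.Sleep(5 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("kubelet healthz status:", resp.StatusCode)
			return
		}
		fmt.Println("timed out waiting for the kubelet healthz endpoint")
	}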

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-561577 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-561577 create -f testdata/busybox.yaml: exit status 1 (46.552511ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-561577" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-561577 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 6 (233.393746ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:36:41.593415   44188 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-561577" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-561577" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 6 (238.712051ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:36:41.830358   44218 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-561577" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-561577" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
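Note: DeployApp never reaches the cluster because the kubeconfig at /home/jenkins/minikube-integration/18259-6412/kubeconfig was never given an "old-k8s-version-561577" context after the failed first start, hence `context "old-k8s-version-561577" does not exist`. A minimal sketch of that pre-check, assuming k8s.io/client-go is available (illustrative only, not the test's actual code):

	// contextcheck.go: verify a named context exists in a kubeconfig before
	// running kubectl --context against it.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := os.Getenv("KUBECONFIG") // e.g. the minikube-integration kubeconfig path
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			os.Exit(1)
		}
		name := "old-k8s-version-561577"
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist in %s\n", name, kubeconfig)
			os.Exit(1)
		}
		fmt.Println("context found:", name)
	}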

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (79.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-561577 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-561577 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m19.04562436s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-561577 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-561577 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-561577 describe deploy/metrics-server -n kube-system: exit status 1 (45.951967ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-561577" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-561577 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 6 (235.587136ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:38:01.159737   45119 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-561577" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-561577" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (79.33s)
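Note: the addon enable fails because every `kubectl apply` above is refused at 127.0.0.1:8443, i.e. no apiserver is listening on the node. A minimal standalone sketch (not minikube code) that checks the port before attempting the apply:

	// portcheck.go: confirm something is listening on the apiserver port
	// before applying addon manifests. Illustrative sketch only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}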

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (521.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-561577 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-561577 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 109 (8m39.774074756s)

                                                
                                                
-- stdout --
	* [old-k8s-version-561577] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node old-k8s-version-561577 in cluster old-k8s-version-561577
	* Restarting existing kvm2 VM for "old-k8s-version-561577" ...
	* Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:38:02.798470   45244 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:38:02.798634   45244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:38:02.798645   45244 out.go:304] Setting ErrFile to fd 2...
	I0229 18:38:02.798650   45244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:38:02.798861   45244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:38:02.799384   45244 out.go:298] Setting JSON to false
	I0229 18:38:02.800279   45244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4824,"bootTime":1709227059,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:38:02.800342   45244 start.go:139] virtualization: kvm guest
	I0229 18:38:02.802639   45244 out.go:177] * [old-k8s-version-561577] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:38:02.804076   45244 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:38:02.805357   45244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:38:02.804128   45244 notify.go:220] Checking for updates...
	I0229 18:38:02.807814   45244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:38:02.809111   45244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:38:02.810380   45244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:38:02.811795   45244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:38:02.813618   45244 config.go:182] Loaded profile config "old-k8s-version-561577": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:38:02.813972   45244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:38:02.814011   45244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:38:02.829112   45244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I0229 18:38:02.829512   45244 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:38:02.830009   45244 main.go:141] libmachine: Using API Version  1
	I0229 18:38:02.830030   45244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:38:02.830327   45244 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:38:02.830500   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:02.832392   45244 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 18:38:02.833804   45244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:38:02.834207   45244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:38:02.834247   45244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:38:02.848579   45244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I0229 18:38:02.849007   45244 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:38:02.849442   45244 main.go:141] libmachine: Using API Version  1
	I0229 18:38:02.849464   45244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:38:02.849777   45244 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:38:02.849968   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:02.885981   45244 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:38:02.887510   45244 start.go:299] selected driver: kvm2
	I0229 18:38:02.887528   45244 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-561577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:38:02.887640   45244 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:38:02.888435   45244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:38:02.888513   45244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:38:02.904126   45244 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:38:02.904532   45244 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:38:02.904602   45244 cni.go:84] Creating CNI manager for ""
	I0229 18:38:02.904620   45244 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:38:02.904634   45244 start_flags.go:323] config:
	{Name:old-k8s-version-561577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:38:02.904815   45244 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:38:02.906895   45244 out.go:177] * Starting control plane node old-k8s-version-561577 in cluster old-k8s-version-561577
	I0229 18:38:02.908213   45244 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 18:38:02.908252   45244 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 18:38:02.908357   45244 cache.go:56] Caching tarball of preloaded images
	I0229 18:38:02.908498   45244 preload.go:174] Found /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:38:02.908532   45244 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 18:38:02.908685   45244 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/config.json ...
	I0229 18:38:02.908950   45244 start.go:365] acquiring machines lock for old-k8s-version-561577: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:38:02.909026   45244 start.go:369] acquired machines lock for "old-k8s-version-561577" in 40.44µs
	I0229 18:38:02.909048   45244 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:38:02.909056   45244 fix.go:54] fixHost starting: 
	I0229 18:38:02.909454   45244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:38:02.909500   45244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:38:02.924832   45244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0229 18:38:02.925303   45244 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:38:02.925823   45244 main.go:141] libmachine: Using API Version  1
	I0229 18:38:02.925840   45244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:38:02.926265   45244 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:38:02.926507   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:02.926682   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetState
	I0229 18:38:02.928518   45244 fix.go:102] recreateIfNeeded on old-k8s-version-561577: state=Stopped err=<nil>
	I0229 18:38:02.928553   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	W0229 18:38:02.928708   45244 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:38:02.930801   45244 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-561577" ...
	I0229 18:38:02.932097   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .Start
	I0229 18:38:02.932262   45244 main.go:141] libmachine: (old-k8s-version-561577) Ensuring networks are active...
	I0229 18:38:02.933080   45244 main.go:141] libmachine: (old-k8s-version-561577) Ensuring network default is active
	I0229 18:38:02.933544   45244 main.go:141] libmachine: (old-k8s-version-561577) Ensuring network mk-old-k8s-version-561577 is active
	I0229 18:38:02.933983   45244 main.go:141] libmachine: (old-k8s-version-561577) Getting domain xml...
	I0229 18:38:02.934849   45244 main.go:141] libmachine: (old-k8s-version-561577) Creating domain...
	I0229 18:38:04.256392   45244 main.go:141] libmachine: (old-k8s-version-561577) Waiting to get IP...
	I0229 18:38:04.257377   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:04.257848   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:04.257905   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:04.257824   45279 retry.go:31] will retry after 295.712821ms: waiting for machine to come up
	I0229 18:38:04.555540   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:04.556159   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:04.556183   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:04.556089   45279 retry.go:31] will retry after 390.17693ms: waiting for machine to come up
	I0229 18:38:04.947758   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:04.948357   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:04.948390   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:04.948316   45279 retry.go:31] will retry after 311.167039ms: waiting for machine to come up
	I0229 18:38:05.260852   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:05.261339   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:05.261368   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:05.261291   45279 retry.go:31] will retry after 468.447055ms: waiting for machine to come up
	I0229 18:38:05.731050   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:05.731605   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:05.731639   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:05.731571   45279 retry.go:31] will retry after 468.240151ms: waiting for machine to come up
	I0229 18:38:06.201159   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:06.201676   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:06.201725   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:06.201626   45279 retry.go:31] will retry after 950.988062ms: waiting for machine to come up
	I0229 18:38:07.153864   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:07.154377   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:07.154409   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:07.154329   45279 retry.go:31] will retry after 953.809035ms: waiting for machine to come up
	I0229 18:38:08.109305   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:08.109889   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:08.109914   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:08.109837   45279 retry.go:31] will retry after 1.022613378s: waiting for machine to come up
	I0229 18:38:09.134077   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:09.134533   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:09.134569   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:09.134496   45279 retry.go:31] will retry after 1.284169809s: waiting for machine to come up
	I0229 18:38:10.421086   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:10.421613   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:10.421642   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:10.421561   45279 retry.go:31] will retry after 1.568097925s: waiting for machine to come up
	I0229 18:38:11.991765   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:11.992304   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:11.992333   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:11.992248   45279 retry.go:31] will retry after 1.767621015s: waiting for machine to come up
	I0229 18:38:13.761363   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:13.761982   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:13.762013   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:13.761918   45279 retry.go:31] will retry after 2.451143228s: waiting for machine to come up
	I0229 18:38:16.215811   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:16.216330   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:16.216362   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:16.216282   45279 retry.go:31] will retry after 3.415696347s: waiting for machine to come up
	I0229 18:38:19.634764   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:19.635150   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | unable to find current IP address of domain old-k8s-version-561577 in network mk-old-k8s-version-561577
	I0229 18:38:19.635177   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | I0229 18:38:19.635109   45279 retry.go:31] will retry after 4.271782388s: waiting for machine to come up
	I0229 18:38:23.908193   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:23.908643   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has current primary IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:23.908667   45244 main.go:141] libmachine: (old-k8s-version-561577) Found IP for machine: 192.168.39.66
	I0229 18:38:23.908680   45244 main.go:141] libmachine: (old-k8s-version-561577) Reserving static IP address...
	I0229 18:38:23.909064   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "old-k8s-version-561577", mac: "52:54:00:88:1b:5b", ip: "192.168.39.66"} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:23.909085   45244 main.go:141] libmachine: (old-k8s-version-561577) Reserved static IP address: 192.168.39.66
	I0229 18:38:23.909103   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | skip adding static IP to network mk-old-k8s-version-561577 - found existing host DHCP lease matching {name: "old-k8s-version-561577", mac: "52:54:00:88:1b:5b", ip: "192.168.39.66"}
	I0229 18:38:23.909139   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | Getting to WaitForSSH function...
	I0229 18:38:23.909158   45244 main.go:141] libmachine: (old-k8s-version-561577) Waiting for SSH to be available...
	I0229 18:38:23.911422   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:23.911739   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:23.911777   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:23.911915   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | Using SSH client type: external
	I0229 18:38:23.911942   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa (-rw-------)
	I0229 18:38:23.911993   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:38:23.912013   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | About to run SSH command:
	I0229 18:38:23.912029   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | exit 0
	I0229 18:38:24.044223   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | SSH cmd err, output: <nil>: 
	I0229 18:38:24.044580   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetConfigRaw
	I0229 18:38:24.045192   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:38:24.047868   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.048230   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.048270   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.048531   45244 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/config.json ...
	I0229 18:38:24.048729   45244 machine.go:88] provisioning docker machine ...
	I0229 18:38:24.048752   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:24.048928   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetMachineName
	I0229 18:38:24.049119   45244 buildroot.go:166] provisioning hostname "old-k8s-version-561577"
	I0229 18:38:24.049138   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetMachineName
	I0229 18:38:24.049282   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.051607   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.052007   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.052032   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.052182   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:38:24.052320   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.052453   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.052551   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:38:24.052678   45244 main.go:141] libmachine: Using SSH client type: native
	I0229 18:38:24.052877   45244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:38:24.052894   45244 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-561577 && echo "old-k8s-version-561577" | sudo tee /etc/hostname
	I0229 18:38:24.177699   45244 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-561577
	
	I0229 18:38:24.177735   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.180920   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.181280   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.181325   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.181502   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:38:24.181699   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.181865   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.182020   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:38:24.182211   45244 main.go:141] libmachine: Using SSH client type: native
	I0229 18:38:24.182373   45244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:38:24.182389   45244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-561577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-561577/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-561577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:38:24.302134   45244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
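
The SSH command above rewrites (or appends) the 127.0.1.1 entry in /etc/hosts so it matches the new machine hostname. As a rough illustration of how such a command string can be assembled before being sent over SSH, here is a minimal Go sketch; the helper name and the standalone main are illustrative and are not minikube's actual provisioning code.

// hostnamePatchCmd builds a shell command that rewrites (or appends) the
// 127.0.1.1 entry in /etc/hosts for the given hostname.
// Illustrative sketch only; not taken from minikube's provisioner.
package main

import "fmt"

func hostnamePatchCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostnamePatchCmd("old-k8s-version-561577"))
}
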
	I0229 18:38:24.302164   45244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:38:24.302221   45244 buildroot.go:174] setting up certificates
	I0229 18:38:24.302234   45244 provision.go:83] configureAuth start
	I0229 18:38:24.302251   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetMachineName
	I0229 18:38:24.302553   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:38:24.305177   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.305553   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.305580   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.305688   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.307984   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.308324   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.308353   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.308491   45244 provision.go:138] copyHostCerts
	I0229 18:38:24.308548   45244 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:38:24.308566   45244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:38:24.308633   45244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:38:24.308746   45244 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:38:24.308754   45244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:38:24.308789   45244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:38:24.308840   45244 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:38:24.308847   45244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:38:24.308867   45244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:38:24.308912   45244 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-561577 san=[192.168.39.66 192.168.39.66 localhost 127.0.0.1 minikube old-k8s-version-561577]
	I0229 18:38:24.495245   45244 provision.go:172] copyRemoteCerts
	I0229 18:38:24.495297   45244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:38:24.495329   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.497933   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.498247   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.498296   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.498488   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:38:24.498703   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.498872   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:38:24.498986   45244 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:38:24.586748   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:38:24.614805   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:38:24.642066   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:38:24.669476   45244 provision.go:86] duration metric: configureAuth took 367.228925ms
	I0229 18:38:24.669503   45244 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:38:24.669696   45244 config.go:182] Loaded profile config "old-k8s-version-561577": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:38:24.669718   45244 machine.go:91] provisioned docker machine in 620.966899ms
	I0229 18:38:24.669735   45244 start.go:300] post-start starting for "old-k8s-version-561577" (driver="kvm2")
	I0229 18:38:24.669750   45244 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:38:24.669778   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:24.670096   45244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:38:24.670122   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.672698   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.673069   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.673098   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.673281   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:38:24.673459   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.673609   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:38:24.673744   45244 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:38:24.758988   45244 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:38:24.763903   45244 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:38:24.763929   45244 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:38:24.763992   45244 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:38:24.764061   45244 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:38:24.764154   45244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:38:24.774210   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:38:24.802372   45244 start.go:303] post-start completed in 132.620244ms
	I0229 18:38:24.802400   45244 fix.go:56] fixHost completed within 21.893343979s
	I0229 18:38:24.802425   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.804832   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.805208   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.805238   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.805395   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:38:24.805579   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.805751   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.805879   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:38:24.806027   45244 main.go:141] libmachine: Using SSH client type: native
	I0229 18:38:24.806185   45244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0229 18:38:24.806195   45244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:38:24.916147   45244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709231904.863843470
	
	I0229 18:38:24.916171   45244 fix.go:206] guest clock: 1709231904.863843470
	I0229 18:38:24.916177   45244 fix.go:219] Guest: 2024-02-29 18:38:24.86384347 +0000 UTC Remote: 2024-02-29 18:38:24.802404385 +0000 UTC m=+22.051434618 (delta=61.439085ms)
	I0229 18:38:24.916211   45244 fix.go:190] guest clock delta is within tolerance: 61.439085ms
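
The guest clock is read with "date +%s.%N" over SSH and compared against the host clock; the run proceeds because the ~61ms delta is within tolerance. A minimal sketch of that comparison, assuming a caller-supplied tolerance (the exact threshold minikube uses is not shown in this log):

// clockDeltaOK reports whether the guest clock, parsed from `date +%s.%N`
// output, is within the given tolerance of the local host clock.
// Sketch only; float parsing loses sub-microsecond precision, which is fine here.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	delta, ok, err := clockDeltaOK("1709231904.863843470", 2*time.Second)
	fmt.Println(delta, ok, err)
}
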
	I0229 18:38:24.916216   45244 start.go:83] releasing machines lock for "old-k8s-version-561577", held for 22.00717794s
	I0229 18:38:24.916241   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:24.916500   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:38:24.919203   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.919579   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.919623   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.919738   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:24.920228   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:24.920430   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .DriverName
	I0229 18:38:24.920542   45244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:38:24.920584   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.920626   45244 ssh_runner.go:195] Run: cat /version.json
	I0229 18:38:24.920651   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHHostname
	I0229 18:38:24.923338   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.923368   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.923711   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.923736   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.923891   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:38:24.923893   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:24.923966   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:24.924080   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.924110   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHPort
	I0229 18:38:24.924281   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHKeyPath
	I0229 18:38:24.924288   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:38:24.924447   45244 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:38:24.924459   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetSSHUsername
	I0229 18:38:24.924620   45244 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/old-k8s-version-561577/id_rsa Username:docker}
	I0229 18:38:25.004120   45244 ssh_runner.go:195] Run: systemctl --version
	I0229 18:38:25.030652   45244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:38:25.037349   45244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:38:25.037400   45244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:38:25.057068   45244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:38:25.057092   45244 start.go:475] detecting cgroup driver to use...
	I0229 18:38:25.057152   45244 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:38:25.084861   45244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:38:25.100179   45244 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:38:25.100232   45244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:38:25.115506   45244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:38:25.129964   45244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:38:25.246530   45244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:38:25.396227   45244 docker.go:233] disabling docker service ...
	I0229 18:38:25.396308   45244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:38:25.414410   45244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:38:25.428238   45244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:38:25.579736   45244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:38:25.720241   45244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:38:25.739046   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:38:25.762516   45244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:38:25.778441   45244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:38:25.790432   45244 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:38:25.790493   45244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:38:25.802228   45244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:38:25.815450   45244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:38:25.828287   45244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:38:25.839604   45244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:38:25.851758   45244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
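
The sed invocations above point containerd at the v1.16-era pause image, force SystemdCgroup = false (so kubelet and containerd both use cgroupfs), switch v1/runc.v1 runtime references to io.containerd.runc.v2, and pin the CNI conf_dir. A small Go sketch that collects equivalent edits as command strings; this is illustrative only, since minikube drives these through its ssh_runner rather than a helper like this:

// containerdConfigEdits returns sed commands equivalent to the ones applied
// above to /etc/containerd/config.toml (pause image, cgroupfs driver,
// runc v2 shim, CNI conf dir). Names here are illustrative.
package main

import "fmt"

func containerdConfigEdits(pauseImage string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "%s"|' /etc/containerd/config.toml`, pauseImage),
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
	}
}

func main() {
	for _, c := range containerdConfigEdits("registry.k8s.io/pause:3.1") {
		fmt.Println(c)
	}
}

Keeping the edits as a slice of command strings makes it easy to replay or log them one at a time, which is roughly the pattern the ssh_runner lines above show.
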
	I0229 18:38:25.864024   45244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:38:25.874675   45244 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:38:25.874738   45244 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:38:25.889437   45244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:38:25.901966   45244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:38:26.056055   45244 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:38:26.091468   45244 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:38:26.091543   45244 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:38:26.097621   45244 retry.go:31] will retry after 562.540427ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:38:26.660340   45244 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:38:26.667910   45244 start.go:543] Will wait 60s for crictl version
	I0229 18:38:26.667966   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:26.673713   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:38:26.716810   45244 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
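
After containerd is restarted, the log polls for /run/containerd/containerd.sock (one retry after roughly half a second) and then waits for a working crictl, each with a 60s budget. A minimal polling sketch with the same budget; the function name and the 500ms poll interval are assumptions, not values from this log:

// waitForSocket polls until the containerd socket exists or the timeout
// expires, mirroring the "Will wait 60s for socket path" step above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
}
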
	I0229 18:38:26.716906   45244 ssh_runner.go:195] Run: containerd --version
	I0229 18:38:26.747925   45244 ssh_runner.go:195] Run: containerd --version
	I0229 18:38:26.780895   45244 out.go:177] * Preparing Kubernetes v1.16.0 on containerd 1.7.11 ...
	I0229 18:38:26.782155   45244 main.go:141] libmachine: (old-k8s-version-561577) Calling .GetIP
	I0229 18:38:26.785135   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:26.785558   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:1b:5b", ip: ""} in network mk-old-k8s-version-561577: {Iface:virbr3 ExpiryTime:2024-02-29 19:38:15 +0000 UTC Type:0 Mac:52:54:00:88:1b:5b Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:old-k8s-version-561577 Clientid:01:52:54:00:88:1b:5b}
	I0229 18:38:26.785585   45244 main.go:141] libmachine: (old-k8s-version-561577) DBG | domain old-k8s-version-561577 has defined IP address 192.168.39.66 and MAC address 52:54:00:88:1b:5b in network mk-old-k8s-version-561577
	I0229 18:38:26.785796   45244 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 18:38:26.790793   45244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:38:26.806967   45244 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 18:38:26.807030   45244 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:38:26.846995   45244 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:38:26.847062   45244 ssh_runner.go:195] Run: which lz4
	I0229 18:38:26.851821   45244 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:38:26.856889   45244 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:38:26.856925   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (440628646 bytes)
	I0229 18:38:28.787949   45244 containerd.go:548] Took 1.936158 seconds to copy over tarball
	I0229 18:38:28.788051   45244 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:38:31.751429   45244 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.963337111s)
	I0229 18:38:31.751467   45244 containerd.go:555] Took 2.963479 seconds to extract the tarball
	I0229 18:38:31.751478   45244 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:38:31.811826   45244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:38:31.936512   45244 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:38:31.970633   45244 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:38:32.010330   45244 retry.go:31] will retry after 234.783814ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:38:31Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:38:32.245819   45244 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:38:32.293682   45244 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 18:38:32.293709   45244 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:38:32.293796   45244 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:38:32.293838   45244 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:38:32.293839   45244 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:38:32.293803   45244 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:38:32.293774   45244 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:38:32.293813   45244 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:38:32.293890   45244 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:38:32.293836   45244 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:38:32.295640   45244 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:38:32.295656   45244 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:38:32.295657   45244 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:38:32.295668   45244 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:38:32.295640   45244 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:38:32.295700   45244 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:38:32.295730   45244 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:38:32.295809   45244 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:38:32.450523   45244 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.2" and sha "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b"
	I0229 18:38:32.450605   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:32.487932   45244 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.1" and sha "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	I0229 18:38:32.488008   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:32.624526   45244 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.3.15-0" and sha "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"
	I0229 18:38:32.624600   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:32.634420   45244 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.16.0" and sha "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d"
	I0229 18:38:32.634486   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:32.639642   45244 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.16.0" and sha "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e"
	I0229 18:38:32.639706   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:32.651908   45244 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.16.0" and sha "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"
	I0229 18:38:32.651969   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:32.863017   45244 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:38:32.896008   45244 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:38:32.896060   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:32.918496   45244 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:38:32.918553   45244 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 18:38:32.918601   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:33.586695   45244 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:38:33.586740   45244 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:38:33.586786   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:33.694222   45244 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.059715024s)
	I0229 18:38:33.694275   45244 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:38:33.694306   45244 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:38:33.694360   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:33.694821   45244 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.055094271s)
	I0229 18:38:33.694864   45244 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:38:33.694884   45244 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:38:33.694941   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:33.695354   45244 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.04337077s)
	I0229 18:38:33.695391   45244 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:38:33.695409   45244 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:38:33.695440   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:33.701424   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 18:38:33.701477   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:38:33.701502   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 18:38:33.701454   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:38:33.701548   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:38:33.704772   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:38:33.848317   45244 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:38:33.848414   45244 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:38:33.848489   45244 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:38:33.848519   45244 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:38:33.848568   45244 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:38:33.848612   45244 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:38:34.092635   45244 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0229 18:38:34.092710   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:34.638307   45244 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.16.0" and sha "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a"
	I0229 18:38:34.638380   45244 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0229 18:38:34.873934   45244 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:38:34.873978   45244 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:38:34.874026   45244 ssh_runner.go:195] Run: which crictl
	I0229 18:38:34.879300   45244 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:38:34.918124   45244 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:38:34.918184   45244 cache_images.go:92] LoadImages completed in 2.62445883s
	W0229 18:38:34.918274   45244 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
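
Because the preload did not populate the runtime, each required image is checked against its expected digest and queued for transfer when it is missing; here the transfer itself also fails because the per-image cache files do not exist on the host. A toy Go sketch of that "needs transfer" decision (types and names are illustrative, not minikube's cache_images structures):

// needsTransfer mirrors the decision in the log: an image must be transferred
// when the runtime does not hold it at the expected digest.
package main

import "fmt"

func needsTransfer(expected map[string]string, inRuntime map[string]string) []string {
	var missing []string
	for image, sha := range expected {
		if inRuntime[image] != sha {
			missing = append(missing, image)
		}
	}
	return missing
}

func main() {
	expected := map[string]string{
		"registry.k8s.io/pause:3.1":     "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
		"registry.k8s.io/coredns:1.6.2": "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b",
	}
	fmt.Println(needsTransfer(expected, map[string]string{}))
}
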
	I0229 18:38:34.918334   45244 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:38:34.961548   45244 cni.go:84] Creating CNI manager for ""
	I0229 18:38:34.961572   45244 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 18:38:34.961589   45244 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:38:34.961604   45244 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.66 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-561577 NodeName:old-k8s-version-561577 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:38:34.961729   45244 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-561577"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-561577
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.66:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:38:34.961810   45244 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-561577 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
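
The kubelet systemd drop-in and the kubeadm YAML above are rendered from the cluster config and then copied to the node as in-memory files (the "scp memory" lines that follow). A minimal text/template sketch of rendering such a drop-in; the template text mirrors the unit shown above, while the struct and field names are made up for illustration:

// Renders a kubelet systemd drop-in like the one shown above from a few
// cluster-config fields. Illustrative sketch, not minikube's generator.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	Version, Hostname, NodeIP, CRISocket string
}

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix://{{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		Version:   "v1.16.0",
		Hostname:  "old-k8s-version-561577",
		NodeIP:    "192.168.39.66",
		CRISocket: "/run/containerd/containerd.sock",
	})
}
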
	I0229 18:38:34.961875   45244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:38:34.975086   45244 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:38:34.975156   45244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:38:34.988893   45244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
	I0229 18:38:35.011609   45244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:38:35.032742   45244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2185 bytes)
	I0229 18:38:35.060745   45244 ssh_runner.go:195] Run: grep 192.168.39.66	control-plane.minikube.internal$ /etc/hosts
	I0229 18:38:35.066519   45244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:38:35.086074   45244 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577 for IP: 192.168.39.66
	I0229 18:38:35.086113   45244 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:38:35.086279   45244 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:38:35.086330   45244 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:38:35.086434   45244 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/client.key
	I0229 18:38:35.086494   45244 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key.b02ae5fc
	I0229 18:38:35.086569   45244 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.key
	I0229 18:38:35.086707   45244 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:38:35.086746   45244 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:38:35.086770   45244 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:38:35.086806   45244 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:38:35.086837   45244 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:38:35.086865   45244 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:38:35.086923   45244 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:38:35.087611   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:38:35.119533   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:38:35.153355   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:38:35.182479   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/old-k8s-version-561577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:38:35.215401   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:38:35.246539   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:38:35.278927   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:38:35.317257   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:38:35.349812   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:38:35.384641   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:38:35.413637   45244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:38:35.444824   45244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:38:35.466328   45244 ssh_runner.go:195] Run: openssl version
	I0229 18:38:35.476345   45244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:38:35.489746   45244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:38:35.495784   45244 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:38:35.495853   45244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:38:35.502903   45244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:38:35.516873   45244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:38:35.534336   45244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:38:35.541396   45244 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:38:35.541453   45244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:38:35.549969   45244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:38:35.563446   45244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:38:35.577839   45244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:38:35.583265   45244 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:38:35.583321   45244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:38:35.589768   45244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:38:35.603996   45244 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:38:35.611181   45244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:38:35.617937   45244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:38:35.624953   45244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:38:35.631933   45244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:38:35.638869   45244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:38:35.646213   45244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
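
Each existing control-plane certificate is checked with "openssl x509 -checkend 86400", i.e. it must remain valid for at least another 24 hours before the existing cluster configuration is reused. The same check can be done without shelling out; a small Go sketch using crypto/x509 (the path in main is just the first certificate from the log):

// certValidFor reports whether a PEM-encoded certificate stays valid for at
// least the given duration, the in-process equivalent of
// `openssl x509 -checkend 86400`. Sketch only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
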
	I0229 18:38:35.652925   45244 kubeadm.go:404] StartCluster: {Name:old-k8s-version-561577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-561577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.66 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:38:35.653116   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:38:35.653183   45244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:38:35.696757   45244 cri.go:89] found id: ""
	I0229 18:38:35.696847   45244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:38:35.709830   45244 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:38:35.709854   45244 kubeadm.go:636] restartCluster start
	I0229 18:38:35.709909   45244 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:38:35.722265   45244 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:35.722939   45244 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-561577" does not appear in /home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:38:35.723255   45244 kubeconfig.go:146] "old-k8s-version-561577" context is missing from /home/jenkins/minikube-integration/18259-6412/kubeconfig - will repair!
	I0229 18:38:35.723772   45244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/kubeconfig: {Name:mk5f8fb7db84beb25fa22fdc3301133bb69ddfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:38:35.725104   45244 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:38:35.737192   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:35.737246   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:35.753227   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:36.237538   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:36.237657   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:36.258527   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:36.738122   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:36.738191   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:36.754591   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:37.238027   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:37.238138   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:37.255229   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:37.737935   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:37.738070   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:37.754950   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:38.238308   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:38.238391   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:38.256593   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:38.738176   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:38.738260   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:38.755157   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:39.237608   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:39.237693   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:39.255455   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:39.738043   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:39.738141   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:39.753596   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:40.237814   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:40.237888   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:40.253366   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:40.737712   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:40.737799   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:40.752486   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:41.238092   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:41.238168   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:41.253335   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:41.737627   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:41.737696   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:41.753346   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:42.237981   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:42.238110   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:42.254605   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:42.737379   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:42.737446   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:42.753047   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:43.237261   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:43.237333   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:43.252601   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:43.738182   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:43.738277   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:43.753519   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:44.237886   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:44.237944   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:44.254455   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:44.738057   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:44.738144   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:44.753005   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:45.238215   45244 api_server.go:166] Checking apiserver status ...
	I0229 18:38:45.238289   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:38:45.260506   45244 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:38:45.737216   45244 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:38:45.737252   45244 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:38:45.737265   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0229 18:38:45.737357   45244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:38:45.789661   45244 cri.go:89] found id: ""
	I0229 18:38:45.789751   45244 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:38:45.817464   45244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:38:45.832445   45244 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:38:45.832523   45244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:38:45.846175   45244 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:38:45.846203   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:38:45.973275   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:38:47.033542   45244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060231971s)
	I0229 18:38:47.033595   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:38:47.280661   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:38:47.391723   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
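
The restart path above re-runs individual kubeadm init phases against the regenerated config rather than a full kubeadm init. As a standalone sketch, with the commands taken verbatim from the log lines above (to be executed on the node; the version-pinned PATH points at the cached v1.16.0 binaries):

    # Re-run the same kubeadm phases minikube invokes while reconfiguring the cluster
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
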
	I0229 18:38:47.497138   45244 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:38:47.497217   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:47.997303   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:48.497373   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:48.997541   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:49.498067   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:49.998070   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:50.497995   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:50.997493   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:51.497339   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:51.997915   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:52.497517   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:52.997614   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:53.498115   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:53.997855   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:54.498129   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:54.997786   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:55.497324   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:55.998001   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:56.497802   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:56.998059   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:57.497535   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:57.997947   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:58.497387   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:58.997274   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:59.497314   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:38:59.998400   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:00.497480   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:00.997377   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:01.497316   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:01.998066   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:02.497497   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:02.997932   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:03.497550   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:03.997868   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:04.497590   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:04.998192   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:05.497625   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:05.997438   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:06.497405   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:06.997344   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:07.497352   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:07.998103   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:08.497458   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:08.998128   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:09.497548   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:09.997765   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:10.497292   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:10.998119   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:11.497372   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:11.997858   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:12.498285   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:12.997906   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:13.498114   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:13.998221   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:14.497554   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:14.997915   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:15.497390   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:15.998354   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:16.497441   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:16.998171   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:17.497770   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:17.998297   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:18.497816   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:18.997338   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:19.497545   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:19.998214   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:20.497526   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:20.997942   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:21.498065   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:21.997558   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:22.497533   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:22.997808   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:23.497932   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:23.998032   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:24.497617   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:24.997369   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:25.497465   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:25.997650   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:26.497510   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:26.998100   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:27.498249   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:27.998162   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:28.497370   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:28.997848   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:29.498179   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:29.998053   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:30.497473   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:30.997930   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:31.497876   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:31.998013   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:32.497527   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:32.997322   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:33.497752   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:33.998123   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:34.497406   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:34.997394   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:35.497704   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:35.998009   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:36.497348   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:36.998273   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:37.498006   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:37.997991   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:38.497414   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:38.997541   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:39.497502   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:39.997480   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:40.497504   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:40.997433   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:41.497411   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:41.997582   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:42.497499   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:42.998168   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:43.498286   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:43.997309   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:44.497915   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:44.997310   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:45.497948   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:45.997302   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:46.498071   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:46.997993   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:47.497924   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:39:47.498005   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:39:47.542296   45244 cri.go:89] found id: ""
	I0229 18:39:47.542322   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.542331   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:39:47.542338   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:39:47.542403   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:39:47.579896   45244 cri.go:89] found id: ""
	I0229 18:39:47.579917   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.579924   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:39:47.579930   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:39:47.579987   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:39:47.620100   45244 cri.go:89] found id: ""
	I0229 18:39:47.620126   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.620137   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:39:47.620167   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:39:47.620237   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:39:47.664016   45244 cri.go:89] found id: ""
	I0229 18:39:47.664041   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.664050   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:39:47.664054   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:39:47.664110   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:39:47.709194   45244 cri.go:89] found id: ""
	I0229 18:39:47.709225   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.709243   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:39:47.709250   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:39:47.709302   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:39:47.760123   45244 cri.go:89] found id: ""
	I0229 18:39:47.760146   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.760154   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:39:47.760159   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:39:47.760214   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:39:47.804107   45244 cri.go:89] found id: ""
	I0229 18:39:47.804127   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.804135   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:39:47.804140   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:39:47.804194   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:39:47.846999   45244 cri.go:89] found id: ""
	I0229 18:39:47.847030   45244 logs.go:276] 0 containers: []
	W0229 18:39:47.847041   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:39:47.847051   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:39:47.847064   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:39:47.898130   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:39:47.898160   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:39:47.912781   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:39:47.912812   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:39:48.047724   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:39:48.047747   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:39:48.047762   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:39:48.082480   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:39:48.082506   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
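
Each failed wait cycle ends with the same diagnostics collection, repeated on every retry below. As a standalone sketch with the commands copied from the log (run on the node over SSH), the gathering step amounts to:

    # kubelet and containerd service logs, kernel warnings, node description, container status
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
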
	I0229 18:39:50.648537   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:50.663126   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:39:50.663190   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:39:50.699619   45244 cri.go:89] found id: ""
	I0229 18:39:50.699642   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.699650   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:39:50.699655   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:39:50.699702   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:39:50.738248   45244 cri.go:89] found id: ""
	I0229 18:39:50.738286   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.738295   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:39:50.738300   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:39:50.738348   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:39:50.779291   45244 cri.go:89] found id: ""
	I0229 18:39:50.779314   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.779338   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:39:50.779343   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:39:50.779404   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:39:50.819421   45244 cri.go:89] found id: ""
	I0229 18:39:50.819453   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.819463   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:39:50.819470   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:39:50.819549   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:39:50.856877   45244 cri.go:89] found id: ""
	I0229 18:39:50.856903   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.856911   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:39:50.856916   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:39:50.856963   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:39:50.893195   45244 cri.go:89] found id: ""
	I0229 18:39:50.893219   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.893226   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:39:50.893235   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:39:50.893288   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:39:50.934224   45244 cri.go:89] found id: ""
	I0229 18:39:50.934251   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.934260   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:39:50.934266   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:39:50.934313   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:39:50.975518   45244 cri.go:89] found id: ""
	I0229 18:39:50.975544   45244 logs.go:276] 0 containers: []
	W0229 18:39:50.975552   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:39:50.975561   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:39:50.975572   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:39:51.026813   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:39:51.026846   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:39:51.041195   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:39:51.041221   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:39:51.134468   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:39:51.134491   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:39:51.134515   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:39:51.172978   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:39:51.173011   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:39:53.728158   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:53.744587   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:39:53.744664   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:39:53.783457   45244 cri.go:89] found id: ""
	I0229 18:39:53.783483   45244 logs.go:276] 0 containers: []
	W0229 18:39:53.783491   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:39:53.783496   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:39:53.783549   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:39:53.821552   45244 cri.go:89] found id: ""
	I0229 18:39:53.821584   45244 logs.go:276] 0 containers: []
	W0229 18:39:53.821595   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:39:53.821602   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:39:53.821675   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:39:53.862752   45244 cri.go:89] found id: ""
	I0229 18:39:53.862776   45244 logs.go:276] 0 containers: []
	W0229 18:39:53.862783   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:39:53.862805   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:39:53.862862   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:39:53.904483   45244 cri.go:89] found id: ""
	I0229 18:39:53.904513   45244 logs.go:276] 0 containers: []
	W0229 18:39:53.904524   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:39:53.904531   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:39:53.904592   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:39:53.943293   45244 cri.go:89] found id: ""
	I0229 18:39:53.943324   45244 logs.go:276] 0 containers: []
	W0229 18:39:53.943335   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:39:53.943342   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:39:53.943402   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:39:53.983971   45244 cri.go:89] found id: ""
	I0229 18:39:53.983995   45244 logs.go:276] 0 containers: []
	W0229 18:39:53.984003   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:39:53.984009   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:39:53.984086   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:39:54.023757   45244 cri.go:89] found id: ""
	I0229 18:39:54.023780   45244 logs.go:276] 0 containers: []
	W0229 18:39:54.023788   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:39:54.023793   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:39:54.023848   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:39:54.062571   45244 cri.go:89] found id: ""
	I0229 18:39:54.062593   45244 logs.go:276] 0 containers: []
	W0229 18:39:54.062600   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:39:54.062607   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:39:54.062618   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:39:54.091081   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:39:54.091121   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:39:54.196179   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:39:54.196199   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:39:54.196214   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:39:54.231238   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:39:54.231268   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:39:54.274645   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:39:54.274670   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:39:56.824869   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:56.839416   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:39:56.839485   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:39:56.877915   45244 cri.go:89] found id: ""
	I0229 18:39:56.877948   45244 logs.go:276] 0 containers: []
	W0229 18:39:56.877956   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:39:56.877961   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:39:56.878029   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:39:56.920272   45244 cri.go:89] found id: ""
	I0229 18:39:56.920303   45244 logs.go:276] 0 containers: []
	W0229 18:39:56.920313   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:39:56.920320   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:39:56.920393   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:39:56.956720   45244 cri.go:89] found id: ""
	I0229 18:39:56.956744   45244 logs.go:276] 0 containers: []
	W0229 18:39:56.956751   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:39:56.956757   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:39:56.956803   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:39:56.996225   45244 cri.go:89] found id: ""
	I0229 18:39:56.996285   45244 logs.go:276] 0 containers: []
	W0229 18:39:56.996298   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:39:56.996308   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:39:56.996370   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:39:57.034888   45244 cri.go:89] found id: ""
	I0229 18:39:57.034913   45244 logs.go:276] 0 containers: []
	W0229 18:39:57.034920   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:39:57.034926   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:39:57.034980   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:39:57.073920   45244 cri.go:89] found id: ""
	I0229 18:39:57.073948   45244 logs.go:276] 0 containers: []
	W0229 18:39:57.073956   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:39:57.073961   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:39:57.074007   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:39:57.127168   45244 cri.go:89] found id: ""
	I0229 18:39:57.127199   45244 logs.go:276] 0 containers: []
	W0229 18:39:57.127207   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:39:57.127213   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:39:57.127288   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:39:57.185799   45244 cri.go:89] found id: ""
	I0229 18:39:57.185839   45244 logs.go:276] 0 containers: []
	W0229 18:39:57.185851   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:39:57.185862   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:39:57.185879   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:39:57.240033   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:39:57.240058   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:39:57.287526   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:39:57.287561   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:39:57.303263   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:39:57.303296   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:39:57.380228   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:39:57.380248   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:39:57.380265   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:39:59.917320   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:39:59.933057   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:39:59.933139   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:39:59.974641   45244 cri.go:89] found id: ""
	I0229 18:39:59.974675   45244 logs.go:276] 0 containers: []
	W0229 18:39:59.974697   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:39:59.974714   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:39:59.974794   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:00.016657   45244 cri.go:89] found id: ""
	I0229 18:40:00.016687   45244 logs.go:276] 0 containers: []
	W0229 18:40:00.016699   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:00.016706   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:00.016767   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:00.055083   45244 cri.go:89] found id: ""
	I0229 18:40:00.055108   45244 logs.go:276] 0 containers: []
	W0229 18:40:00.055115   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:00.055120   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:00.055175   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:00.103863   45244 cri.go:89] found id: ""
	I0229 18:40:00.103894   45244 logs.go:276] 0 containers: []
	W0229 18:40:00.103906   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:00.103913   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:00.103985   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:00.145086   45244 cri.go:89] found id: ""
	I0229 18:40:00.145115   45244 logs.go:276] 0 containers: []
	W0229 18:40:00.145126   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:00.145134   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:00.145203   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:00.208435   45244 cri.go:89] found id: ""
	I0229 18:40:00.208465   45244 logs.go:276] 0 containers: []
	W0229 18:40:00.208476   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:00.208483   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:00.208569   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:00.251564   45244 cri.go:89] found id: ""
	I0229 18:40:00.251591   45244 logs.go:276] 0 containers: []
	W0229 18:40:00.251599   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:00.251605   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:00.251672   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:00.290105   45244 cri.go:89] found id: ""
	I0229 18:40:00.290130   45244 logs.go:276] 0 containers: []
	W0229 18:40:00.290151   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:00.290160   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:00.290188   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:00.338868   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:00.338902   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:00.353118   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:00.353143   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:00.429727   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:00.429755   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:00.429772   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:00.463576   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:00.463605   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:03.007621   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:03.022832   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:03.022922   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:03.062631   45244 cri.go:89] found id: ""
	I0229 18:40:03.062666   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.062677   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:03.062685   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:03.062749   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:03.100401   45244 cri.go:89] found id: ""
	I0229 18:40:03.100429   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.100437   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:03.100442   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:03.100502   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:03.141963   45244 cri.go:89] found id: ""
	I0229 18:40:03.141993   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.142003   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:03.142010   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:03.142072   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:03.198729   45244 cri.go:89] found id: ""
	I0229 18:40:03.198758   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.198765   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:03.198770   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:03.198839   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:03.235328   45244 cri.go:89] found id: ""
	I0229 18:40:03.235363   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.235373   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:03.235379   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:03.235426   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:03.283070   45244 cri.go:89] found id: ""
	I0229 18:40:03.283094   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.283105   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:03.283111   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:03.283169   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:03.327429   45244 cri.go:89] found id: ""
	I0229 18:40:03.327454   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.327465   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:03.327472   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:03.327530   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:03.369861   45244 cri.go:89] found id: ""
	I0229 18:40:03.369892   45244 logs.go:276] 0 containers: []
	W0229 18:40:03.369903   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:03.369913   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:03.369928   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:03.422364   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:03.422403   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:03.436717   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:03.436738   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:03.504514   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:03.504535   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:03.504547   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:03.538148   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:03.538176   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:06.082860   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:06.099337   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:06.099397   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:06.146370   45244 cri.go:89] found id: ""
	I0229 18:40:06.146400   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.146411   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:06.146420   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:06.146481   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:06.201117   45244 cri.go:89] found id: ""
	I0229 18:40:06.201149   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.201157   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:06.201162   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:06.201211   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:06.242987   45244 cri.go:89] found id: ""
	I0229 18:40:06.243013   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.243023   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:06.243030   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:06.243092   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:06.281934   45244 cri.go:89] found id: ""
	I0229 18:40:06.281962   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.281972   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:06.281979   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:06.282038   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:06.320336   45244 cri.go:89] found id: ""
	I0229 18:40:06.320368   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.320379   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:06.320386   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:06.320436   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:06.356417   45244 cri.go:89] found id: ""
	I0229 18:40:06.356449   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.356467   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:06.356474   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:06.356524   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:06.395111   45244 cri.go:89] found id: ""
	I0229 18:40:06.395148   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.395158   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:06.395163   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:06.395225   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:06.433340   45244 cri.go:89] found id: ""
	I0229 18:40:06.433365   45244 logs.go:276] 0 containers: []
	W0229 18:40:06.433372   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:06.433381   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:06.433417   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:06.467483   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:06.467512   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:06.513631   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:06.513666   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:06.564822   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:06.564851   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:06.579328   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:06.579351   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:06.649673   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:09.150635   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:09.170114   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:09.170218   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:09.222643   45244 cri.go:89] found id: ""
	I0229 18:40:09.222675   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.222686   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:09.222691   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:09.222740   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:09.274085   45244 cri.go:89] found id: ""
	I0229 18:40:09.274112   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.274122   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:09.274129   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:09.274181   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:09.318117   45244 cri.go:89] found id: ""
	I0229 18:40:09.318144   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.318154   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:09.318162   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:09.318253   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:09.356331   45244 cri.go:89] found id: ""
	I0229 18:40:09.356361   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.356372   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:09.356379   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:09.356441   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:09.395464   45244 cri.go:89] found id: ""
	I0229 18:40:09.395489   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.395498   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:09.395504   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:09.395566   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:09.431753   45244 cri.go:89] found id: ""
	I0229 18:40:09.431787   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.431796   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:09.431803   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:09.431862   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:09.477251   45244 cri.go:89] found id: ""
	I0229 18:40:09.477275   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.477283   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:09.477290   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:09.477358   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:09.516706   45244 cri.go:89] found id: ""
	I0229 18:40:09.516729   45244 logs.go:276] 0 containers: []
	W0229 18:40:09.516737   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:09.516744   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:09.516757   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:09.591072   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:09.591095   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:09.591112   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:09.625367   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:09.625401   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:09.668718   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:09.668748   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:09.717391   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:09.717423   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:12.232665   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:12.246955   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:12.247033   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:12.290259   45244 cri.go:89] found id: ""
	I0229 18:40:12.290286   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.290298   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:12.290305   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:12.290357   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:12.333852   45244 cri.go:89] found id: ""
	I0229 18:40:12.333884   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.333891   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:12.333899   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:12.333952   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:12.374269   45244 cri.go:89] found id: ""
	I0229 18:40:12.374301   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.374308   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:12.374313   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:12.374373   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:12.412383   45244 cri.go:89] found id: ""
	I0229 18:40:12.412418   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.412430   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:12.412437   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:12.412494   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:12.460462   45244 cri.go:89] found id: ""
	I0229 18:40:12.460492   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.460503   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:12.460510   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:12.460570   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:12.500109   45244 cri.go:89] found id: ""
	I0229 18:40:12.500145   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.500157   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:12.500164   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:12.500223   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:12.539298   45244 cri.go:89] found id: ""
	I0229 18:40:12.539324   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.539334   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:12.539342   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:12.539405   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:12.579502   45244 cri.go:89] found id: ""
	I0229 18:40:12.579549   45244 logs.go:276] 0 containers: []
	W0229 18:40:12.579561   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:12.579572   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:12.579594   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:12.630720   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:12.630750   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:12.646161   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:12.646191   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:12.725040   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:12.725059   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:12.725072   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:12.760594   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:12.760639   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:15.302384   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:15.316856   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:15.316914   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:15.353898   45244 cri.go:89] found id: ""
	I0229 18:40:15.353923   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.353931   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:15.353936   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:15.353990   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:15.389304   45244 cri.go:89] found id: ""
	I0229 18:40:15.389339   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.389350   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:15.389357   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:15.389418   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:15.426031   45244 cri.go:89] found id: ""
	I0229 18:40:15.426056   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.426064   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:15.426070   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:15.426124   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:15.465867   45244 cri.go:89] found id: ""
	I0229 18:40:15.465896   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.465904   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:15.465915   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:15.465961   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:15.502218   45244 cri.go:89] found id: ""
	I0229 18:40:15.502246   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.502257   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:15.502264   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:15.502324   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:15.538294   45244 cri.go:89] found id: ""
	I0229 18:40:15.538320   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.538327   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:15.538332   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:15.538388   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:15.576438   45244 cri.go:89] found id: ""
	I0229 18:40:15.576461   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.576469   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:15.576475   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:15.576562   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:15.612434   45244 cri.go:89] found id: ""
	I0229 18:40:15.612456   45244 logs.go:276] 0 containers: []
	W0229 18:40:15.612463   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:15.612479   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:15.612490   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:15.655779   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:15.655811   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:15.706382   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:15.706417   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:15.722023   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:15.722052   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:15.794364   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:15.794407   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:15.794427   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:18.328609   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:18.343518   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:18.343586   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:18.388039   45244 cri.go:89] found id: ""
	I0229 18:40:18.388063   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.388070   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:18.388075   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:18.388118   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:18.433835   45244 cri.go:89] found id: ""
	I0229 18:40:18.433873   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.433882   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:18.433888   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:18.433943   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:18.476031   45244 cri.go:89] found id: ""
	I0229 18:40:18.476059   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.476066   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:18.476076   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:18.476147   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:18.520864   45244 cri.go:89] found id: ""
	I0229 18:40:18.520888   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.520896   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:18.520902   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:18.520949   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:18.560080   45244 cri.go:89] found id: ""
	I0229 18:40:18.560107   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.560118   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:18.560125   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:18.560195   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:18.595905   45244 cri.go:89] found id: ""
	I0229 18:40:18.595931   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.595960   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:18.595967   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:18.596033   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:18.639685   45244 cri.go:89] found id: ""
	I0229 18:40:18.639708   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.639718   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:18.639725   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:18.639784   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:18.681386   45244 cri.go:89] found id: ""
	I0229 18:40:18.681414   45244 logs.go:276] 0 containers: []
	W0229 18:40:18.681426   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:18.681437   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:18.681451   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:18.732386   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:18.732419   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:18.748156   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:18.748186   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:18.830577   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:18.830605   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:18.830635   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:18.876921   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:18.876959   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:21.443707   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:21.458885   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:21.458962   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:21.504344   45244 cri.go:89] found id: ""
	I0229 18:40:21.504381   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.504395   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:21.504405   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:21.504498   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:21.548754   45244 cri.go:89] found id: ""
	I0229 18:40:21.548786   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.548796   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:21.548803   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:21.548885   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:21.597403   45244 cri.go:89] found id: ""
	I0229 18:40:21.597433   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.597444   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:21.597450   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:21.597509   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:21.638082   45244 cri.go:89] found id: ""
	I0229 18:40:21.638113   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.638133   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:21.638141   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:21.638210   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:21.681379   45244 cri.go:89] found id: ""
	I0229 18:40:21.681403   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.681411   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:21.681416   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:21.681484   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:21.721904   45244 cri.go:89] found id: ""
	I0229 18:40:21.721931   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.721942   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:21.721949   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:21.722006   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:21.768370   45244 cri.go:89] found id: ""
	I0229 18:40:21.768397   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.768407   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:21.768414   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:21.768502   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:21.812278   45244 cri.go:89] found id: ""
	I0229 18:40:21.812308   45244 logs.go:276] 0 containers: []
	W0229 18:40:21.812319   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:21.812330   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:21.812346   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:21.864393   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:21.864427   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:21.889613   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:21.889645   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:22.022591   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:22.022617   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:22.022634   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:22.058253   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:22.058286   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:24.600183   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:24.615416   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:24.615492   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:24.653505   45244 cri.go:89] found id: ""
	I0229 18:40:24.653528   45244 logs.go:276] 0 containers: []
	W0229 18:40:24.653538   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:24.653545   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:24.653594   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:24.696780   45244 cri.go:89] found id: ""
	I0229 18:40:24.696808   45244 logs.go:276] 0 containers: []
	W0229 18:40:24.696819   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:24.696826   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:24.696905   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:24.745408   45244 cri.go:89] found id: ""
	I0229 18:40:24.745438   45244 logs.go:276] 0 containers: []
	W0229 18:40:24.745447   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:24.745453   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:24.745523   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:24.787596   45244 cri.go:89] found id: ""
	I0229 18:40:24.787622   45244 logs.go:276] 0 containers: []
	W0229 18:40:24.787633   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:24.787639   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:24.787707   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:24.831832   45244 cri.go:89] found id: ""
	I0229 18:40:24.831860   45244 logs.go:276] 0 containers: []
	W0229 18:40:24.831871   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:24.831878   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:24.831940   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:24.901755   45244 cri.go:89] found id: ""
	I0229 18:40:24.901785   45244 logs.go:276] 0 containers: []
	W0229 18:40:24.901796   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:24.901804   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:24.901862   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:24.961604   45244 cri.go:89] found id: ""
	I0229 18:40:24.961633   45244 logs.go:276] 0 containers: []
	W0229 18:40:24.961644   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:24.961650   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:24.961718   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:25.009281   45244 cri.go:89] found id: ""
	I0229 18:40:25.009309   45244 logs.go:276] 0 containers: []
	W0229 18:40:25.009321   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:25.009332   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:25.009360   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:25.087916   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:25.087940   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:25.087956   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:25.133146   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:25.133184   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:25.179019   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:25.179042   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:25.231522   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:25.231553   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:27.749128   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:27.766773   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:27.766846   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:27.821292   45244 cri.go:89] found id: ""
	I0229 18:40:27.821318   45244 logs.go:276] 0 containers: []
	W0229 18:40:27.821328   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:27.821336   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:27.821393   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:27.899489   45244 cri.go:89] found id: ""
	I0229 18:40:27.899522   45244 logs.go:276] 0 containers: []
	W0229 18:40:27.899535   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:27.899542   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:27.899623   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:27.950472   45244 cri.go:89] found id: ""
	I0229 18:40:27.950512   45244 logs.go:276] 0 containers: []
	W0229 18:40:27.950524   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:27.950532   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:27.950600   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:28.001298   45244 cri.go:89] found id: ""
	I0229 18:40:28.001322   45244 logs.go:276] 0 containers: []
	W0229 18:40:28.001334   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:28.001342   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:28.001407   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:28.040494   45244 cri.go:89] found id: ""
	I0229 18:40:28.040527   45244 logs.go:276] 0 containers: []
	W0229 18:40:28.040538   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:28.040548   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:28.040620   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:28.080653   45244 cri.go:89] found id: ""
	I0229 18:40:28.080685   45244 logs.go:276] 0 containers: []
	W0229 18:40:28.080696   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:28.080704   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:28.080773   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:28.142109   45244 cri.go:89] found id: ""
	I0229 18:40:28.142136   45244 logs.go:276] 0 containers: []
	W0229 18:40:28.142146   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:28.142154   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:28.142214   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:28.187635   45244 cri.go:89] found id: ""
	I0229 18:40:28.187668   45244 logs.go:276] 0 containers: []
	W0229 18:40:28.187679   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:28.187691   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:28.187709   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:28.203556   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:28.203582   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:28.297180   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:28.297211   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:28.297228   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:28.341872   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:28.341915   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:28.386310   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:28.386342   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:30.947273   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:30.964601   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:30.964674   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:31.010927   45244 cri.go:89] found id: ""
	I0229 18:40:31.010954   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.010964   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:31.010985   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:31.011046   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:31.051286   45244 cri.go:89] found id: ""
	I0229 18:40:31.051312   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.051322   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:31.051330   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:31.051388   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:31.096923   45244 cri.go:89] found id: ""
	I0229 18:40:31.096952   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.096964   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:31.096972   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:31.097035   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:31.143262   45244 cri.go:89] found id: ""
	I0229 18:40:31.143289   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.143299   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:31.143316   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:31.143383   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:31.197165   45244 cri.go:89] found id: ""
	I0229 18:40:31.197195   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.197205   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:31.197213   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:31.197302   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:31.248072   45244 cri.go:89] found id: ""
	I0229 18:40:31.248095   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.248103   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:31.248108   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:31.248172   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:31.299639   45244 cri.go:89] found id: ""
	I0229 18:40:31.299699   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.299713   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:31.299719   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:31.299774   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:31.342604   45244 cri.go:89] found id: ""
	I0229 18:40:31.342646   45244 logs.go:276] 0 containers: []
	W0229 18:40:31.342657   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:31.342668   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:31.342682   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:31.397240   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:31.397273   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:31.413975   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:31.414012   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:31.485207   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:31.485229   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:31.485242   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:31.521454   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:31.521484   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:34.068671   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:34.084416   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:34.084494   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:34.130715   45244 cri.go:89] found id: ""
	I0229 18:40:34.130738   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.130746   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:34.130751   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:34.130803   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:34.170494   45244 cri.go:89] found id: ""
	I0229 18:40:34.170527   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.170538   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:34.170555   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:34.170626   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:34.213727   45244 cri.go:89] found id: ""
	I0229 18:40:34.213749   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.213770   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:34.213777   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:34.213840   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:34.255812   45244 cri.go:89] found id: ""
	I0229 18:40:34.255841   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.255852   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:34.255860   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:34.255919   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:34.293237   45244 cri.go:89] found id: ""
	I0229 18:40:34.293265   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.293272   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:34.293278   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:34.293326   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:34.336330   45244 cri.go:89] found id: ""
	I0229 18:40:34.336353   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.336361   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:34.336366   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:34.336413   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:34.373649   45244 cri.go:89] found id: ""
	I0229 18:40:34.373673   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.373684   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:34.373692   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:34.373750   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:34.409123   45244 cri.go:89] found id: ""
	I0229 18:40:34.409148   45244 logs.go:276] 0 containers: []
	W0229 18:40:34.409155   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:34.409163   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:34.409175   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:34.443599   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:34.443627   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:34.484991   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:34.485024   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:34.534658   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:34.534686   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:34.550138   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:34.550162   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:34.623065   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:37.123693   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:37.140443   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:37.140512   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:37.178423   45244 cri.go:89] found id: ""
	I0229 18:40:37.178449   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.178460   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:37.178468   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:37.178524   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:37.221962   45244 cri.go:89] found id: ""
	I0229 18:40:37.221985   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.221992   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:37.221997   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:37.222043   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:37.261896   45244 cri.go:89] found id: ""
	I0229 18:40:37.261918   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.261927   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:37.261934   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:37.261983   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:37.308419   45244 cri.go:89] found id: ""
	I0229 18:40:37.308449   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.308457   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:37.308463   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:37.308519   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:37.345606   45244 cri.go:89] found id: ""
	I0229 18:40:37.345631   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.345639   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:37.345644   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:37.345692   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:37.388208   45244 cri.go:89] found id: ""
	I0229 18:40:37.388236   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.388247   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:37.388255   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:37.388308   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:37.434035   45244 cri.go:89] found id: ""
	I0229 18:40:37.434064   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.434072   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:37.434078   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:37.434143   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:37.477124   45244 cri.go:89] found id: ""
	I0229 18:40:37.477150   45244 logs.go:276] 0 containers: []
	W0229 18:40:37.477161   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:37.477177   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:37.477193   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:37.511519   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:37.511555   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:37.555008   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:37.555037   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:37.616837   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:37.616866   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:37.636093   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:37.636118   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:37.720207   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:40.221081   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:40.237168   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:40.237237   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:40.284394   45244 cri.go:89] found id: ""
	I0229 18:40:40.284424   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.284432   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:40.284438   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:40.284491   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:40.325690   45244 cri.go:89] found id: ""
	I0229 18:40:40.325718   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.325729   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:40.325736   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:40.325796   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:40.365544   45244 cri.go:89] found id: ""
	I0229 18:40:40.365573   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.365582   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:40.365589   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:40.365658   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:40.405404   45244 cri.go:89] found id: ""
	I0229 18:40:40.405430   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.405445   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:40.405453   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:40.405518   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:40.453944   45244 cri.go:89] found id: ""
	I0229 18:40:40.453973   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.453985   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:40.453993   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:40.454054   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:40.493881   45244 cri.go:89] found id: ""
	I0229 18:40:40.493910   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.493921   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:40.493928   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:40.493986   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:40.535629   45244 cri.go:89] found id: ""
	I0229 18:40:40.535658   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.535669   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:40.535676   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:40.535735   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:40.578281   45244 cri.go:89] found id: ""
	I0229 18:40:40.578314   45244 logs.go:276] 0 containers: []
	W0229 18:40:40.578322   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:40.578330   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:40.578343   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:40.603350   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:40.603389   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:40.710241   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:40.710266   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:40.710282   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:40.748686   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:40.748718   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:40.801284   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:40.801325   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:43.364937   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:43.379923   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:43.380000   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:43.418974   45244 cri.go:89] found id: ""
	I0229 18:40:43.419001   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.419012   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:43.419019   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:43.419077   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:43.457129   45244 cri.go:89] found id: ""
	I0229 18:40:43.457155   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.457166   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:43.457174   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:43.457241   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:43.497496   45244 cri.go:89] found id: ""
	I0229 18:40:43.497525   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.497533   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:43.497539   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:43.497589   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:43.533166   45244 cri.go:89] found id: ""
	I0229 18:40:43.533191   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.533199   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:43.533204   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:43.533253   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:43.577627   45244 cri.go:89] found id: ""
	I0229 18:40:43.577648   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.577655   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:43.577660   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:43.577719   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:43.629763   45244 cri.go:89] found id: ""
	I0229 18:40:43.629803   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.629815   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:43.629822   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:43.629887   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:43.674158   45244 cri.go:89] found id: ""
	I0229 18:40:43.674178   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.674186   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:43.674192   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:43.674237   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:43.728226   45244 cri.go:89] found id: ""
	I0229 18:40:43.728256   45244 logs.go:276] 0 containers: []
	W0229 18:40:43.728267   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:43.728276   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:43.728292   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:43.743716   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:43.743742   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:43.821618   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:43.821635   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:43.821649   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:43.857409   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:43.857440   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:43.905458   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:43.905483   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:46.456826   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:46.471701   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:46.471771   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:46.511796   45244 cri.go:89] found id: ""
	I0229 18:40:46.511829   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.511842   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:46.511850   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:46.511912   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:46.553894   45244 cri.go:89] found id: ""
	I0229 18:40:46.553923   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.553933   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:46.553939   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:46.553986   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:46.612004   45244 cri.go:89] found id: ""
	I0229 18:40:46.612031   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.612042   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:46.612049   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:46.612107   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:46.691424   45244 cri.go:89] found id: ""
	I0229 18:40:46.691446   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.691454   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:46.691459   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:46.691511   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:46.728473   45244 cri.go:89] found id: ""
	I0229 18:40:46.728503   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.728512   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:46.728520   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:46.728582   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:46.767047   45244 cri.go:89] found id: ""
	I0229 18:40:46.767071   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.767081   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:46.767088   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:46.767144   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:46.813143   45244 cri.go:89] found id: ""
	I0229 18:40:46.813168   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.813179   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:46.813186   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:46.813251   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:46.855012   45244 cri.go:89] found id: ""
	I0229 18:40:46.855039   45244 logs.go:276] 0 containers: []
	W0229 18:40:46.855054   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:46.855064   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:46.855079   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:46.902533   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:46.902568   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:46.917456   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:46.917483   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:46.989238   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:46.989267   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:46.989281   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:47.023557   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:47.023592   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:49.578247   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:49.599811   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:49.599871   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:49.657464   45244 cri.go:89] found id: ""
	I0229 18:40:49.657496   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.657505   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:49.657510   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:49.657567   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:49.712030   45244 cri.go:89] found id: ""
	I0229 18:40:49.712068   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.712076   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:49.712081   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:49.712136   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:49.755033   45244 cri.go:89] found id: ""
	I0229 18:40:49.755059   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.755068   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:49.755076   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:49.755134   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:49.795219   45244 cri.go:89] found id: ""
	I0229 18:40:49.795243   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.795251   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:49.795257   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:49.795317   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:49.833071   45244 cri.go:89] found id: ""
	I0229 18:40:49.833115   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.833123   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:49.833130   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:49.833192   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:49.871607   45244 cri.go:89] found id: ""
	I0229 18:40:49.871640   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.871654   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:49.871661   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:49.871723   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:49.909284   45244 cri.go:89] found id: ""
	I0229 18:40:49.909305   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.909313   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:49.909319   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:49.909380   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:49.950772   45244 cri.go:89] found id: ""
	I0229 18:40:49.950794   45244 logs.go:276] 0 containers: []
	W0229 18:40:49.950802   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:49.950810   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:49.950825   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:49.999319   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:49.999353   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:50.016098   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:50.016132   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:50.093182   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:50.093207   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:50.093226   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:50.127839   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:50.127872   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:52.670916   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:52.688539   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:52.688613   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:52.742213   45244 cri.go:89] found id: ""
	I0229 18:40:52.742240   45244 logs.go:276] 0 containers: []
	W0229 18:40:52.742252   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:52.742259   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:52.742320   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:52.782414   45244 cri.go:89] found id: ""
	I0229 18:40:52.782440   45244 logs.go:276] 0 containers: []
	W0229 18:40:52.782451   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:52.782458   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:52.782517   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:52.825497   45244 cri.go:89] found id: ""
	I0229 18:40:52.825529   45244 logs.go:276] 0 containers: []
	W0229 18:40:52.825541   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:52.825549   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:52.825620   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:52.872250   45244 cri.go:89] found id: ""
	I0229 18:40:52.872273   45244 logs.go:276] 0 containers: []
	W0229 18:40:52.872281   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:52.872287   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:52.872343   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:52.912986   45244 cri.go:89] found id: ""
	I0229 18:40:52.913013   45244 logs.go:276] 0 containers: []
	W0229 18:40:52.913021   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:52.913027   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:52.913086   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:52.956524   45244 cri.go:89] found id: ""
	I0229 18:40:52.956550   45244 logs.go:276] 0 containers: []
	W0229 18:40:52.956558   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:52.956564   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:52.956611   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:52.993709   45244 cri.go:89] found id: ""
	I0229 18:40:52.993746   45244 logs.go:276] 0 containers: []
	W0229 18:40:52.993757   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:52.993764   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:52.993827   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:53.034082   45244 cri.go:89] found id: ""
	I0229 18:40:53.034111   45244 logs.go:276] 0 containers: []
	W0229 18:40:53.034127   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:53.034138   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:53.034152   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:53.087430   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:53.087470   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:53.106885   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:53.106917   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:53.196150   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:53.196176   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:53.196197   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:53.234130   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:53.234159   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:55.782626   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:55.797462   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:55.797524   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:55.836895   45244 cri.go:89] found id: ""
	I0229 18:40:55.836929   45244 logs.go:276] 0 containers: []
	W0229 18:40:55.836942   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:55.836950   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:55.837018   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:55.876948   45244 cri.go:89] found id: ""
	I0229 18:40:55.876976   45244 logs.go:276] 0 containers: []
	W0229 18:40:55.876988   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:55.876995   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:55.877060   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:55.918168   45244 cri.go:89] found id: ""
	I0229 18:40:55.918203   45244 logs.go:276] 0 containers: []
	W0229 18:40:55.918215   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:55.918222   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:55.918294   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:55.957539   45244 cri.go:89] found id: ""
	I0229 18:40:55.957573   45244 logs.go:276] 0 containers: []
	W0229 18:40:55.957585   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:55.957592   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:55.957657   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:55.994500   45244 cri.go:89] found id: ""
	I0229 18:40:55.994526   45244 logs.go:276] 0 containers: []
	W0229 18:40:55.994534   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:55.994540   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:55.994613   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:56.035524   45244 cri.go:89] found id: ""
	I0229 18:40:56.035554   45244 logs.go:276] 0 containers: []
	W0229 18:40:56.035564   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:56.035571   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:56.035632   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:56.079798   45244 cri.go:89] found id: ""
	I0229 18:40:56.079824   45244 logs.go:276] 0 containers: []
	W0229 18:40:56.079835   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:56.079842   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:56.079911   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:56.116936   45244 cri.go:89] found id: ""
	I0229 18:40:56.116967   45244 logs.go:276] 0 containers: []
	W0229 18:40:56.116978   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:56.116989   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:56.117002   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:56.133891   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:56.133927   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:56.217743   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:56.217764   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:56.217779   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:56.257566   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:56.257596   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:40:56.309256   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:56.309288   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:58.861879   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:58.877596   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:40:58.877671   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:40:58.924363   45244 cri.go:89] found id: ""
	I0229 18:40:58.924396   45244 logs.go:276] 0 containers: []
	W0229 18:40:58.924408   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:40:58.924415   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:40:58.924479   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:40:58.965892   45244 cri.go:89] found id: ""
	I0229 18:40:58.965919   45244 logs.go:276] 0 containers: []
	W0229 18:40:58.965927   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:40:58.965933   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:40:58.966000   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:40:59.006350   45244 cri.go:89] found id: ""
	I0229 18:40:59.006379   45244 logs.go:276] 0 containers: []
	W0229 18:40:59.006390   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:40:59.006397   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:40:59.006463   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:40:59.055586   45244 cri.go:89] found id: ""
	I0229 18:40:59.055614   45244 logs.go:276] 0 containers: []
	W0229 18:40:59.055624   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:40:59.055633   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:40:59.055694   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:40:59.101686   45244 cri.go:89] found id: ""
	I0229 18:40:59.101712   45244 logs.go:276] 0 containers: []
	W0229 18:40:59.101720   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:40:59.101726   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:40:59.101784   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:40:59.150310   45244 cri.go:89] found id: ""
	I0229 18:40:59.150332   45244 logs.go:276] 0 containers: []
	W0229 18:40:59.150339   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:40:59.150345   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:40:59.150427   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:40:59.199862   45244 cri.go:89] found id: ""
	I0229 18:40:59.199898   45244 logs.go:276] 0 containers: []
	W0229 18:40:59.199906   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:40:59.199911   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:40:59.199995   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:40:59.240204   45244 cri.go:89] found id: ""
	I0229 18:40:59.240231   45244 logs.go:276] 0 containers: []
	W0229 18:40:59.240241   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:40:59.240255   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:40:59.240272   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:40:59.297855   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:40:59.297891   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:40:59.315785   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:40:59.315811   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:40:59.416745   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:40:59.416771   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:40:59.416786   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:40:59.455126   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:40:59.455156   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:02.011971   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:02.028691   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:02.028763   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:02.078239   45244 cri.go:89] found id: ""
	I0229 18:41:02.078280   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.078291   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:02.078299   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:02.078365   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:02.119875   45244 cri.go:89] found id: ""
	I0229 18:41:02.119902   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.119913   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:02.119920   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:02.119979   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:02.169979   45244 cri.go:89] found id: ""
	I0229 18:41:02.170011   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.170023   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:02.170029   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:02.170092   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:02.208841   45244 cri.go:89] found id: ""
	I0229 18:41:02.208873   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.208885   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:02.208892   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:02.208954   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:02.255753   45244 cri.go:89] found id: ""
	I0229 18:41:02.255781   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.255793   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:02.255800   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:02.255860   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:02.297315   45244 cri.go:89] found id: ""
	I0229 18:41:02.297342   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.297352   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:02.297360   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:02.297419   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:02.336965   45244 cri.go:89] found id: ""
	I0229 18:41:02.337004   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.337016   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:02.337023   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:02.337086   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:02.391821   45244 cri.go:89] found id: ""
	I0229 18:41:02.391854   45244 logs.go:276] 0 containers: []
	W0229 18:41:02.391866   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:02.391877   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:02.391892   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:02.464681   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:02.464725   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:02.483484   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:02.483513   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:02.560907   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:02.560929   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:02.560943   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:02.597342   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:02.597371   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:05.140612   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:05.157722   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:05.157780   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:05.199873   45244 cri.go:89] found id: ""
	I0229 18:41:05.199896   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.199904   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:05.199910   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:05.199957   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:05.248763   45244 cri.go:89] found id: ""
	I0229 18:41:05.248790   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.248798   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:05.248803   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:05.248859   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:05.290266   45244 cri.go:89] found id: ""
	I0229 18:41:05.290294   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.290302   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:05.290308   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:05.290366   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:05.337242   45244 cri.go:89] found id: ""
	I0229 18:41:05.337268   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.337279   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:05.337286   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:05.337358   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:05.414646   45244 cri.go:89] found id: ""
	I0229 18:41:05.414676   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.414686   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:05.414692   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:05.414753   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:05.466400   45244 cri.go:89] found id: ""
	I0229 18:41:05.466429   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.466440   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:05.466447   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:05.466510   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:05.504344   45244 cri.go:89] found id: ""
	I0229 18:41:05.504371   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.504381   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:05.504389   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:05.504451   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:05.558845   45244 cri.go:89] found id: ""
	I0229 18:41:05.558872   45244 logs.go:276] 0 containers: []
	W0229 18:41:05.558880   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:05.558890   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:05.558905   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:05.576381   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:05.576414   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:05.659820   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:05.659839   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:05.659849   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:05.698861   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:05.698893   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:05.746337   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:05.746365   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:08.300529   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:08.319232   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:08.319301   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:08.392975   45244 cri.go:89] found id: ""
	I0229 18:41:08.393003   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.393013   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:08.393021   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:08.393083   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:08.439863   45244 cri.go:89] found id: ""
	I0229 18:41:08.439890   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.439898   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:08.439904   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:08.439964   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:08.477508   45244 cri.go:89] found id: ""
	I0229 18:41:08.477536   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.477547   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:08.477555   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:08.477620   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:08.513542   45244 cri.go:89] found id: ""
	I0229 18:41:08.513566   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.513574   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:08.513580   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:08.513635   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:08.559830   45244 cri.go:89] found id: ""
	I0229 18:41:08.559854   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.559864   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:08.559872   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:08.559928   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:08.597183   45244 cri.go:89] found id: ""
	I0229 18:41:08.597207   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.597217   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:08.597224   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:08.597291   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:08.635479   45244 cri.go:89] found id: ""
	I0229 18:41:08.635511   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.635522   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:08.635529   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:08.635587   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:08.676770   45244 cri.go:89] found id: ""
	I0229 18:41:08.676795   45244 logs.go:276] 0 containers: []
	W0229 18:41:08.676806   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:08.676817   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:08.676836   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:08.725359   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:08.725388   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:08.741464   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:08.741500   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:08.828382   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:08.828404   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:08.828416   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:08.865658   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:08.865692   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:11.417543   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:11.435376   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:11.435449   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:11.485327   45244 cri.go:89] found id: ""
	I0229 18:41:11.485359   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.485373   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:11.485386   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:11.485443   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:11.521310   45244 cri.go:89] found id: ""
	I0229 18:41:11.521337   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.521350   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:11.521357   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:11.521424   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:11.558583   45244 cri.go:89] found id: ""
	I0229 18:41:11.558607   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.558614   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:11.558620   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:11.558680   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:11.597255   45244 cri.go:89] found id: ""
	I0229 18:41:11.597286   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.597297   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:11.597312   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:11.597375   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:11.639867   45244 cri.go:89] found id: ""
	I0229 18:41:11.639897   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.639908   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:11.639916   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:11.639975   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:11.682370   45244 cri.go:89] found id: ""
	I0229 18:41:11.682400   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.682411   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:11.682419   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:11.682478   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:11.723882   45244 cri.go:89] found id: ""
	I0229 18:41:11.723909   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.723919   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:11.723927   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:11.724008   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:11.770566   45244 cri.go:89] found id: ""
	I0229 18:41:11.770590   45244 logs.go:276] 0 containers: []
	W0229 18:41:11.770599   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:11.770611   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:11.770624   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:11.819318   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:11.819344   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:11.834628   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:11.834655   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:11.912107   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:11.912131   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:11.912147   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:11.947210   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:11.947239   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:14.494090   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:14.510244   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:14.510317   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:14.553382   45244 cri.go:89] found id: ""
	I0229 18:41:14.553402   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.553410   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:14.553415   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:14.553474   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:14.596254   45244 cri.go:89] found id: ""
	I0229 18:41:14.596287   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.596298   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:14.596305   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:14.596373   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:14.634173   45244 cri.go:89] found id: ""
	I0229 18:41:14.634200   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.634207   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:14.634213   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:14.634262   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:14.678036   45244 cri.go:89] found id: ""
	I0229 18:41:14.678066   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.678078   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:14.678085   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:14.678140   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:14.722864   45244 cri.go:89] found id: ""
	I0229 18:41:14.722890   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.722899   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:14.722905   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:14.722968   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:14.763505   45244 cri.go:89] found id: ""
	I0229 18:41:14.763533   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.763543   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:14.763551   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:14.763611   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:14.808543   45244 cri.go:89] found id: ""
	I0229 18:41:14.808570   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.808581   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:14.808587   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:14.808675   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:14.853044   45244 cri.go:89] found id: ""
	I0229 18:41:14.853069   45244 logs.go:276] 0 containers: []
	W0229 18:41:14.853077   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:14.853084   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:14.853095   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:14.903626   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:14.903665   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:14.921997   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:14.922026   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:15.010010   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:15.010038   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:15.010055   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:15.047288   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:15.047315   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:17.593264   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:17.611161   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:17.611239   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:17.659137   45244 cri.go:89] found id: ""
	I0229 18:41:17.659161   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.659175   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:17.659180   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:17.659238   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:17.702888   45244 cri.go:89] found id: ""
	I0229 18:41:17.702914   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.702922   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:17.702928   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:17.702991   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:17.744962   45244 cri.go:89] found id: ""
	I0229 18:41:17.744995   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.745006   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:17.745014   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:17.745076   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:17.784218   45244 cri.go:89] found id: ""
	I0229 18:41:17.784250   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.784259   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:17.784264   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:17.784321   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:17.827840   45244 cri.go:89] found id: ""
	I0229 18:41:17.827867   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.827878   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:17.827885   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:17.827952   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:17.875017   45244 cri.go:89] found id: ""
	I0229 18:41:17.875044   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.875055   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:17.875062   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:17.875119   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:17.912688   45244 cri.go:89] found id: ""
	I0229 18:41:17.912715   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.912722   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:17.912727   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:17.912781   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:17.950277   45244 cri.go:89] found id: ""
	I0229 18:41:17.950307   45244 logs.go:276] 0 containers: []
	W0229 18:41:17.950321   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:17.950332   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:17.950346   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:17.965411   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:17.965439   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:18.043030   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:18.043055   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:18.043073   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:18.079150   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:18.079191   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:18.134782   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:18.134813   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:20.712792   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:20.726708   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:20.726776   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:20.766531   45244 cri.go:89] found id: ""
	I0229 18:41:20.766578   45244 logs.go:276] 0 containers: []
	W0229 18:41:20.766589   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:20.766596   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:20.766665   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:20.812263   45244 cri.go:89] found id: ""
	I0229 18:41:20.812285   45244 logs.go:276] 0 containers: []
	W0229 18:41:20.812293   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:20.812298   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:20.812354   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:20.858973   45244 cri.go:89] found id: ""
	I0229 18:41:20.859001   45244 logs.go:276] 0 containers: []
	W0229 18:41:20.859009   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:20.859015   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:20.859071   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:20.899486   45244 cri.go:89] found id: ""
	I0229 18:41:20.899514   45244 logs.go:276] 0 containers: []
	W0229 18:41:20.899524   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:20.899531   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:20.899592   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:20.943653   45244 cri.go:89] found id: ""
	I0229 18:41:20.943685   45244 logs.go:276] 0 containers: []
	W0229 18:41:20.943698   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:20.943705   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:20.943767   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:20.982599   45244 cri.go:89] found id: ""
	I0229 18:41:20.982625   45244 logs.go:276] 0 containers: []
	W0229 18:41:20.982636   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:20.982643   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:20.982706   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:21.020050   45244 cri.go:89] found id: ""
	I0229 18:41:21.020074   45244 logs.go:276] 0 containers: []
	W0229 18:41:21.020084   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:21.020091   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:21.020161   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:21.062376   45244 cri.go:89] found id: ""
	I0229 18:41:21.062398   45244 logs.go:276] 0 containers: []
	W0229 18:41:21.062406   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:21.062417   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:21.062430   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:21.112332   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:21.112373   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:21.172780   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:21.172823   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:21.226693   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:21.226724   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:21.243035   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:21.243072   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:21.320866   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:23.821129   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:23.836067   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:23.836140   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:23.882448   45244 cri.go:89] found id: ""
	I0229 18:41:23.882476   45244 logs.go:276] 0 containers: []
	W0229 18:41:23.882485   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:23.882492   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:23.882538   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:23.927169   45244 cri.go:89] found id: ""
	I0229 18:41:23.927198   45244 logs.go:276] 0 containers: []
	W0229 18:41:23.927208   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:23.927215   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:23.927282   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:23.969166   45244 cri.go:89] found id: ""
	I0229 18:41:23.969197   45244 logs.go:276] 0 containers: []
	W0229 18:41:23.969209   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:23.969217   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:23.969272   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:24.011383   45244 cri.go:89] found id: ""
	I0229 18:41:24.011415   45244 logs.go:276] 0 containers: []
	W0229 18:41:24.011427   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:24.011436   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:24.011497   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:24.053158   45244 cri.go:89] found id: ""
	I0229 18:41:24.053190   45244 logs.go:276] 0 containers: []
	W0229 18:41:24.053198   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:24.053203   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:24.053268   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:24.108720   45244 cri.go:89] found id: ""
	I0229 18:41:24.108753   45244 logs.go:276] 0 containers: []
	W0229 18:41:24.108764   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:24.108781   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:24.108856   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:24.189729   45244 cri.go:89] found id: ""
	I0229 18:41:24.189762   45244 logs.go:276] 0 containers: []
	W0229 18:41:24.189775   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:24.189782   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:24.189844   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:24.235331   45244 cri.go:89] found id: ""
	I0229 18:41:24.235364   45244 logs.go:276] 0 containers: []
	W0229 18:41:24.235376   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:24.235388   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:24.235403   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:24.286971   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:24.287025   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:24.345822   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:24.345868   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:24.362961   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:24.362991   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:24.439393   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:24.439418   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:24.439435   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:26.979800   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:26.993874   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:26.993930   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:27.036911   45244 cri.go:89] found id: ""
	I0229 18:41:27.036942   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.036954   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:27.036961   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:27.037032   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:27.076959   45244 cri.go:89] found id: ""
	I0229 18:41:27.076981   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.076989   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:27.076994   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:27.077041   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:27.133521   45244 cri.go:89] found id: ""
	I0229 18:41:27.133553   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.133564   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:27.133571   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:27.133638   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:27.173462   45244 cri.go:89] found id: ""
	I0229 18:41:27.173492   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.173503   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:27.173510   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:27.173579   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:27.226239   45244 cri.go:89] found id: ""
	I0229 18:41:27.226269   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.226288   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:27.226295   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:27.226354   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:27.267886   45244 cri.go:89] found id: ""
	I0229 18:41:27.267909   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.267919   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:27.267927   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:27.267995   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:27.306149   45244 cri.go:89] found id: ""
	I0229 18:41:27.306171   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.306178   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:27.306182   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:27.306229   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:27.346269   45244 cri.go:89] found id: ""
	I0229 18:41:27.346293   45244 logs.go:276] 0 containers: []
	W0229 18:41:27.346304   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:27.346317   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:27.346331   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:27.389673   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:27.389703   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:27.441584   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:27.441618   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:27.457664   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:27.457697   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:27.529336   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:27.529361   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:27.529386   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:30.064909   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:30.082034   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:30.082105   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:30.132446   45244 cri.go:89] found id: ""
	I0229 18:41:30.132472   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.132482   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:30.132490   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:30.132552   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:30.179602   45244 cri.go:89] found id: ""
	I0229 18:41:30.179639   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.179651   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:30.179658   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:30.179722   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:30.231861   45244 cri.go:89] found id: ""
	I0229 18:41:30.231887   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.231896   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:30.231901   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:30.231962   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:30.272370   45244 cri.go:89] found id: ""
	I0229 18:41:30.272399   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.272409   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:30.272418   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:30.272480   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:30.312600   45244 cri.go:89] found id: ""
	I0229 18:41:30.312629   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.312639   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:30.312647   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:30.312706   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:30.351009   45244 cri.go:89] found id: ""
	I0229 18:41:30.351036   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.351047   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:30.351054   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:30.351114   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:30.399769   45244 cri.go:89] found id: ""
	I0229 18:41:30.399800   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.399811   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:30.399818   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:30.399885   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:30.438928   45244 cri.go:89] found id: ""
	I0229 18:41:30.438957   45244 logs.go:276] 0 containers: []
	W0229 18:41:30.438966   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:30.438976   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:30.438991   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:30.454716   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:30.454748   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:30.527903   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:30.527926   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:30.527941   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:30.564909   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:30.564939   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:30.614698   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:30.614730   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:33.166521   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:33.185028   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:33.185107   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:33.228661   45244 cri.go:89] found id: ""
	I0229 18:41:33.228690   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.228700   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:33.228706   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:33.228753   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:33.265845   45244 cri.go:89] found id: ""
	I0229 18:41:33.265875   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.265885   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:33.265893   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:33.265952   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:33.307339   45244 cri.go:89] found id: ""
	I0229 18:41:33.307367   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.307376   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:33.307382   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:33.307440   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:33.349838   45244 cri.go:89] found id: ""
	I0229 18:41:33.349870   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.349881   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:33.349889   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:33.349950   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:33.388517   45244 cri.go:89] found id: ""
	I0229 18:41:33.388539   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.388547   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:33.388552   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:33.388601   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:33.425826   45244 cri.go:89] found id: ""
	I0229 18:41:33.425854   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.425862   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:33.425867   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:33.425920   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:33.463003   45244 cri.go:89] found id: ""
	I0229 18:41:33.463028   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.463035   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:33.463041   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:33.463100   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:33.507514   45244 cri.go:89] found id: ""
	I0229 18:41:33.507537   45244 logs.go:276] 0 containers: []
	W0229 18:41:33.507545   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:33.507553   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:33.507564   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:33.558393   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:33.558423   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:33.573954   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:33.573987   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:33.647014   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:33.647042   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:33.647060   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:33.683877   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:33.683906   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:36.231468   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:36.244966   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:36.245035   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:36.283924   45244 cri.go:89] found id: ""
	I0229 18:41:36.283949   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.283957   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:36.283962   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:36.284018   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:36.323603   45244 cri.go:89] found id: ""
	I0229 18:41:36.323643   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.323655   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:36.323663   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:36.323721   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:36.363350   45244 cri.go:89] found id: ""
	I0229 18:41:36.363375   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.363387   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:36.363396   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:36.363444   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:36.402445   45244 cri.go:89] found id: ""
	I0229 18:41:36.402474   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.402483   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:36.402489   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:36.402541   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:36.443550   45244 cri.go:89] found id: ""
	I0229 18:41:36.443573   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.443581   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:36.443587   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:36.443632   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:36.483644   45244 cri.go:89] found id: ""
	I0229 18:41:36.483669   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.483678   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:36.483684   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:36.483747   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:36.527428   45244 cri.go:89] found id: ""
	I0229 18:41:36.527455   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.527463   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:36.527468   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:36.527525   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:36.565015   45244 cri.go:89] found id: ""
	I0229 18:41:36.565037   45244 logs.go:276] 0 containers: []
	W0229 18:41:36.565045   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:36.565056   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:36.565071   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:36.602190   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:36.602219   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:36.646682   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:36.646716   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:36.697791   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:36.697824   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:36.712633   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:36.712659   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:36.785825   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:39.286520   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:39.301278   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:39.301353   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:39.344238   45244 cri.go:89] found id: ""
	I0229 18:41:39.344276   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.344297   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:39.344305   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:39.344368   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:39.382048   45244 cri.go:89] found id: ""
	I0229 18:41:39.382085   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.382094   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:39.382101   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:39.382160   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:39.422103   45244 cri.go:89] found id: ""
	I0229 18:41:39.422126   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.422134   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:39.422141   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:39.422195   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:39.458029   45244 cri.go:89] found id: ""
	I0229 18:41:39.458056   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.458065   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:39.458071   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:39.458117   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:39.492832   45244 cri.go:89] found id: ""
	I0229 18:41:39.492862   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.492871   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:39.492876   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:39.492925   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:39.529807   45244 cri.go:89] found id: ""
	I0229 18:41:39.529832   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.529840   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:39.529846   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:39.529890   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:39.569543   45244 cri.go:89] found id: ""
	I0229 18:41:39.569570   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.569580   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:39.569587   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:39.569644   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:39.610714   45244 cri.go:89] found id: ""
	I0229 18:41:39.610746   45244 logs.go:276] 0 containers: []
	W0229 18:41:39.610757   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:39.610768   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:39.610782   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:39.662602   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:39.662634   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:39.679028   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:39.679066   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:39.756004   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:39.756031   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:39.756046   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:39.793904   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:39.793935   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:42.354657   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:42.369560   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:42.369635   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:42.406612   45244 cri.go:89] found id: ""
	I0229 18:41:42.406638   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.406652   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:42.406660   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:42.406721   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:42.443999   45244 cri.go:89] found id: ""
	I0229 18:41:42.444030   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.444040   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:42.444046   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:42.444105   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:42.480348   45244 cri.go:89] found id: ""
	I0229 18:41:42.480378   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.480389   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:42.480397   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:42.480474   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:42.520963   45244 cri.go:89] found id: ""
	I0229 18:41:42.520995   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.521023   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:42.521030   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:42.521091   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:42.563634   45244 cri.go:89] found id: ""
	I0229 18:41:42.563660   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.563669   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:42.563674   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:42.563724   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:42.605799   45244 cri.go:89] found id: ""
	I0229 18:41:42.605830   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.605844   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:42.605851   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:42.605941   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:42.645638   45244 cri.go:89] found id: ""
	I0229 18:41:42.645666   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.645676   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:42.645684   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:42.645750   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:42.690264   45244 cri.go:89] found id: ""
	I0229 18:41:42.690297   45244 logs.go:276] 0 containers: []
	W0229 18:41:42.690319   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:42.690339   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:42.690357   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:42.744866   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:42.744899   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:42.760910   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:42.760941   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:42.839002   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:42.839029   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:42.839045   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:42.877183   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:42.877211   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:45.433811   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:45.449593   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:45.449665   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:45.488008   45244 cri.go:89] found id: ""
	I0229 18:41:45.488038   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.488049   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:45.488063   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:45.488124   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:45.528087   45244 cri.go:89] found id: ""
	I0229 18:41:45.528116   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.528127   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:45.528134   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:45.528199   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:45.569955   45244 cri.go:89] found id: ""
	I0229 18:41:45.569979   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.569987   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:45.569993   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:45.570066   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:45.607471   45244 cri.go:89] found id: ""
	I0229 18:41:45.607500   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.607511   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:45.607519   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:45.607576   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:45.646619   45244 cri.go:89] found id: ""
	I0229 18:41:45.646647   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.646658   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:45.646665   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:45.646747   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:45.684438   45244 cri.go:89] found id: ""
	I0229 18:41:45.684481   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.684492   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:45.684499   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:45.684612   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:45.729956   45244 cri.go:89] found id: ""
	I0229 18:41:45.729980   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.729989   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:45.729997   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:45.730058   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:45.771499   45244 cri.go:89] found id: ""
	I0229 18:41:45.771532   45244 logs.go:276] 0 containers: []
	W0229 18:41:45.771543   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:45.771555   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:45.771569   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:45.824057   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:45.824089   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:45.851184   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:45.851215   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:45.947237   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:45.947263   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:45.947279   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:45.984703   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:45.984733   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:48.528861   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:48.542839   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:48.542905   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:48.581085   45244 cri.go:89] found id: ""
	I0229 18:41:48.581111   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.581120   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:48.581126   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:48.581203   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:48.621800   45244 cri.go:89] found id: ""
	I0229 18:41:48.621826   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.621836   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:48.621843   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:48.621895   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:48.661868   45244 cri.go:89] found id: ""
	I0229 18:41:48.661896   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.661903   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:48.661909   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:48.661967   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:48.703815   45244 cri.go:89] found id: ""
	I0229 18:41:48.703843   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.703855   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:48.703862   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:48.703927   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:48.746359   45244 cri.go:89] found id: ""
	I0229 18:41:48.746384   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.746392   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:48.746398   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:48.746452   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:48.788307   45244 cri.go:89] found id: ""
	I0229 18:41:48.788335   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.788343   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:48.788348   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:48.788407   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:48.825711   45244 cri.go:89] found id: ""
	I0229 18:41:48.825736   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.825746   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:48.825753   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:48.825820   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:48.875555   45244 cri.go:89] found id: ""
	I0229 18:41:48.875584   45244 logs.go:276] 0 containers: []
	W0229 18:41:48.875601   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:48.875612   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:48.875625   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:48.937255   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:48.937301   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:48.956133   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:48.956158   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:49.031177   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:49.031207   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:49.031247   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:49.069024   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:49.069053   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:51.613285   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:51.627886   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:51.627959   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:51.673328   45244 cri.go:89] found id: ""
	I0229 18:41:51.673355   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.673367   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:51.673374   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:51.673431   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:51.713144   45244 cri.go:89] found id: ""
	I0229 18:41:51.713180   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.713200   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:51.713208   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:51.713276   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:51.752579   45244 cri.go:89] found id: ""
	I0229 18:41:51.752613   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.752626   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:51.752661   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:51.752731   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:51.796122   45244 cri.go:89] found id: ""
	I0229 18:41:51.796147   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.796157   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:51.796164   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:51.796226   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:51.834991   45244 cri.go:89] found id: ""
	I0229 18:41:51.835022   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.835034   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:51.835041   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:51.835106   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:51.902292   45244 cri.go:89] found id: ""
	I0229 18:41:51.902325   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.902336   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:51.902343   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:51.902422   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:51.958626   45244 cri.go:89] found id: ""
	I0229 18:41:51.958655   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.958670   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:51.958678   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:51.958738   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:51.996832   45244 cri.go:89] found id: ""
	I0229 18:41:51.996865   45244 logs.go:276] 0 containers: []
	W0229 18:41:51.996877   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:51.996899   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:51.996912   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:52.051929   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:52.051972   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:52.068159   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:52.068194   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:52.142730   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:52.142756   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:52.142775   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:52.179004   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:52.179038   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:54.730439   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:54.744874   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:54.744944   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:54.784428   45244 cri.go:89] found id: ""
	I0229 18:41:54.784457   45244 logs.go:276] 0 containers: []
	W0229 18:41:54.784466   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:54.784471   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:54.784531   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:54.825964   45244 cri.go:89] found id: ""
	I0229 18:41:54.825988   45244 logs.go:276] 0 containers: []
	W0229 18:41:54.825995   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:54.826000   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:54.826050   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:54.903206   45244 cri.go:89] found id: ""
	I0229 18:41:54.903237   45244 logs.go:276] 0 containers: []
	W0229 18:41:54.903245   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:54.903253   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:54.903305   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:54.967915   45244 cri.go:89] found id: ""
	I0229 18:41:54.967941   45244 logs.go:276] 0 containers: []
	W0229 18:41:54.967951   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:54.967959   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:54.968018   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:55.008325   45244 cri.go:89] found id: ""
	I0229 18:41:55.008350   45244 logs.go:276] 0 containers: []
	W0229 18:41:55.008358   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:55.008363   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:55.008434   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:55.050829   45244 cri.go:89] found id: ""
	I0229 18:41:55.050858   45244 logs.go:276] 0 containers: []
	W0229 18:41:55.050868   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:55.050875   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:55.050929   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:55.089628   45244 cri.go:89] found id: ""
	I0229 18:41:55.089651   45244 logs.go:276] 0 containers: []
	W0229 18:41:55.089659   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:55.089664   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:55.089734   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:55.131338   45244 cri.go:89] found id: ""
	I0229 18:41:55.131364   45244 logs.go:276] 0 containers: []
	W0229 18:41:55.131376   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:55.131388   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:55.131418   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:55.205453   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:55.205498   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:55.205516   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:55.240312   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:55.240339   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:55.283952   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:55.283982   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:55.338480   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:55.338510   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:57.854526   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:57.873220   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:41:57.873299   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:41:57.921139   45244 cri.go:89] found id: ""
	I0229 18:41:57.921166   45244 logs.go:276] 0 containers: []
	W0229 18:41:57.921177   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:57.921185   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:41:57.921235   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:41:57.977754   45244 cri.go:89] found id: ""
	I0229 18:41:57.977783   45244 logs.go:276] 0 containers: []
	W0229 18:41:57.977794   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:41:57.977801   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:41:57.977860   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:41:58.026712   45244 cri.go:89] found id: ""
	I0229 18:41:58.026740   45244 logs.go:276] 0 containers: []
	W0229 18:41:58.026751   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:41:58.026758   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:41:58.026827   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:41:58.067441   45244 cri.go:89] found id: ""
	I0229 18:41:58.067460   45244 logs.go:276] 0 containers: []
	W0229 18:41:58.067469   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:58.067476   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:41:58.067529   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:41:58.102939   45244 cri.go:89] found id: ""
	I0229 18:41:58.102967   45244 logs.go:276] 0 containers: []
	W0229 18:41:58.102975   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:58.102981   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:41:58.103037   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:41:58.139119   45244 cri.go:89] found id: ""
	I0229 18:41:58.139148   45244 logs.go:276] 0 containers: []
	W0229 18:41:58.139156   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:58.139168   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:41:58.139227   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:41:58.175087   45244 cri.go:89] found id: ""
	I0229 18:41:58.175115   45244 logs.go:276] 0 containers: []
	W0229 18:41:58.175125   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:58.175132   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:41:58.175225   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:41:58.212460   45244 cri.go:89] found id: ""
	I0229 18:41:58.212485   45244 logs.go:276] 0 containers: []
	W0229 18:41:58.212496   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:58.212507   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:58.212524   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:58.294742   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:58.294768   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:41:58.294784   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:41:58.332278   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:41:58.332307   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:58.377574   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:58.377603   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:58.448644   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:58.448672   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:00.968936   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:00.985645   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:00.985780   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:01.030082   45244 cri.go:89] found id: ""
	I0229 18:42:01.030105   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.030113   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:01.030118   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:01.030164   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:01.075317   45244 cri.go:89] found id: ""
	I0229 18:42:01.075342   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.075352   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:01.075360   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:01.075422   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:01.119855   45244 cri.go:89] found id: ""
	I0229 18:42:01.119878   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.119885   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:01.119890   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:01.119953   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:01.160563   45244 cri.go:89] found id: ""
	I0229 18:42:01.160587   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.160596   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:01.160601   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:01.160658   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:01.199735   45244 cri.go:89] found id: ""
	I0229 18:42:01.199767   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.199778   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:01.199785   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:01.199851   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:01.236962   45244 cri.go:89] found id: ""
	I0229 18:42:01.236991   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.236999   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:01.237005   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:01.237053   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:01.280631   45244 cri.go:89] found id: ""
	I0229 18:42:01.280657   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.280672   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:01.280678   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:01.280730   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:01.327315   45244 cri.go:89] found id: ""
	I0229 18:42:01.327341   45244 logs.go:276] 0 containers: []
	W0229 18:42:01.327353   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:01.327363   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:01.327375   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:01.380648   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:01.380680   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:01.395471   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:01.395494   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:01.465149   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:01.465166   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:01.465179   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:01.499264   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:01.499301   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:04.040946   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:04.056806   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:04.056871   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:04.106588   45244 cri.go:89] found id: ""
	I0229 18:42:04.106620   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.106631   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:04.106641   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:04.106704   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:04.146718   45244 cri.go:89] found id: ""
	I0229 18:42:04.146751   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.146763   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:04.146770   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:04.146824   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:04.186853   45244 cri.go:89] found id: ""
	I0229 18:42:04.186882   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.186890   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:04.186896   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:04.186978   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:04.226126   45244 cri.go:89] found id: ""
	I0229 18:42:04.226156   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.226167   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:04.226173   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:04.226240   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:04.274022   45244 cri.go:89] found id: ""
	I0229 18:42:04.274054   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.274065   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:04.274072   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:04.274149   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:04.312925   45244 cri.go:89] found id: ""
	I0229 18:42:04.312945   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.312953   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:04.312958   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:04.313003   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:04.352478   45244 cri.go:89] found id: ""
	I0229 18:42:04.352505   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.352516   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:04.352523   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:04.352614   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:04.394465   45244 cri.go:89] found id: ""
	I0229 18:42:04.394492   45244 logs.go:276] 0 containers: []
	W0229 18:42:04.394500   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:04.394507   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:04.394518   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:04.451698   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:04.451734   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:04.467509   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:04.467531   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:04.544051   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:04.544072   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:04.544084   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:04.580228   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:04.580261   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:07.155753   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:07.171171   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:07.171229   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:07.212581   45244 cri.go:89] found id: ""
	I0229 18:42:07.212611   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.212628   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:07.212635   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:07.212690   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:07.252127   45244 cri.go:89] found id: ""
	I0229 18:42:07.252155   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.252167   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:07.252174   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:07.252223   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:07.298906   45244 cri.go:89] found id: ""
	I0229 18:42:07.298936   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.298946   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:07.298953   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:07.299014   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:07.340108   45244 cri.go:89] found id: ""
	I0229 18:42:07.340142   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.340153   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:07.340160   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:07.340220   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:07.380729   45244 cri.go:89] found id: ""
	I0229 18:42:07.380758   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.380770   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:07.380777   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:07.380826   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:07.417897   45244 cri.go:89] found id: ""
	I0229 18:42:07.417927   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.417938   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:07.417945   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:07.418012   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:07.460236   45244 cri.go:89] found id: ""
	I0229 18:42:07.460266   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.460277   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:07.460285   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:07.460369   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:07.497687   45244 cri.go:89] found id: ""
	I0229 18:42:07.497716   45244 logs.go:276] 0 containers: []
	W0229 18:42:07.497727   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:07.497737   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:07.497755   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:07.573184   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:07.573206   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:07.573228   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:07.620873   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:07.620917   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:07.683631   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:07.683666   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:07.738726   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:07.738765   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:10.254309   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:10.271684   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:10.271762   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:10.315990   45244 cri.go:89] found id: ""
	I0229 18:42:10.316015   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.316023   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:10.316028   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:10.316080   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:10.358226   45244 cri.go:89] found id: ""
	I0229 18:42:10.358252   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.358259   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:10.358273   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:10.358320   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:10.405173   45244 cri.go:89] found id: ""
	I0229 18:42:10.405200   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.405212   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:10.405220   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:10.405284   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:10.444618   45244 cri.go:89] found id: ""
	I0229 18:42:10.444656   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.444665   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:10.444669   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:10.444717   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:10.481401   45244 cri.go:89] found id: ""
	I0229 18:42:10.481429   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.481439   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:10.481444   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:10.481489   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:10.517577   45244 cri.go:89] found id: ""
	I0229 18:42:10.517617   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.517628   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:10.517636   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:10.517698   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:10.558557   45244 cri.go:89] found id: ""
	I0229 18:42:10.558586   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.558597   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:10.558604   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:10.558668   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:10.624124   45244 cri.go:89] found id: ""
	I0229 18:42:10.624154   45244 logs.go:276] 0 containers: []
	W0229 18:42:10.624164   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:10.624176   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:10.624200   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:10.714199   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:10.714222   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:10.714234   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:10.752821   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:10.752851   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:10.792824   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:10.792850   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:10.843758   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:10.843788   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:13.359717   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:13.377113   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:13.377197   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:13.413847   45244 cri.go:89] found id: ""
	I0229 18:42:13.413875   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.413886   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:13.413894   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:13.413949   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:13.448921   45244 cri.go:89] found id: ""
	I0229 18:42:13.448951   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.448961   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:13.448968   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:13.449035   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:13.485999   45244 cri.go:89] found id: ""
	I0229 18:42:13.486028   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.486038   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:13.486045   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:13.486111   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:13.522491   45244 cri.go:89] found id: ""
	I0229 18:42:13.522520   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.522531   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:13.522538   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:13.522605   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:13.562156   45244 cri.go:89] found id: ""
	I0229 18:42:13.562183   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.562191   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:13.562197   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:13.562242   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:13.635303   45244 cri.go:89] found id: ""
	I0229 18:42:13.635332   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.635340   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:13.635346   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:13.635401   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:13.689044   45244 cri.go:89] found id: ""
	I0229 18:42:13.689074   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.689085   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:13.689091   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:13.689142   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:13.730899   45244 cri.go:89] found id: ""
	I0229 18:42:13.730928   45244 logs.go:276] 0 containers: []
	W0229 18:42:13.730939   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:13.730948   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:13.730958   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:13.766628   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:13.766655   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:13.816813   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:13.816837   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:13.868194   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:13.868225   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:13.884852   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:13.884879   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:13.955147   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:16.456299   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:16.470721   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:16.470792   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:16.514770   45244 cri.go:89] found id: ""
	I0229 18:42:16.514799   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.514812   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:16.514819   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:16.514878   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:16.558245   45244 cri.go:89] found id: ""
	I0229 18:42:16.558283   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.558295   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:16.558302   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:16.558362   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:16.612081   45244 cri.go:89] found id: ""
	I0229 18:42:16.612109   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.612117   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:16.612124   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:16.612181   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:16.658332   45244 cri.go:89] found id: ""
	I0229 18:42:16.658366   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.658375   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:16.658381   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:16.658440   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:16.705701   45244 cri.go:89] found id: ""
	I0229 18:42:16.705733   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.705744   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:16.705752   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:16.705814   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:16.743979   45244 cri.go:89] found id: ""
	I0229 18:42:16.744007   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.744028   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:16.744037   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:16.744117   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:16.783270   45244 cri.go:89] found id: ""
	I0229 18:42:16.783299   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.783306   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:16.783312   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:16.783378   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:16.826321   45244 cri.go:89] found id: ""
	I0229 18:42:16.826344   45244 logs.go:276] 0 containers: []
	W0229 18:42:16.826353   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:16.826362   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:16.826375   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:16.872807   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:16.872847   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:16.922861   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:16.922900   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:16.938120   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:16.938142   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:17.010937   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:17.010980   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:17.010995   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:19.549846   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:19.567143   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:19.567224   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:19.616175   45244 cri.go:89] found id: ""
	I0229 18:42:19.616256   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.616273   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:19.616282   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:19.616346   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:19.677784   45244 cri.go:89] found id: ""
	I0229 18:42:19.677810   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.677822   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:19.677830   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:19.677889   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:19.719501   45244 cri.go:89] found id: ""
	I0229 18:42:19.719529   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.719540   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:19.719556   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:19.719620   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:19.761035   45244 cri.go:89] found id: ""
	I0229 18:42:19.761065   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.761077   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:19.761085   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:19.761153   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:19.799985   45244 cri.go:89] found id: ""
	I0229 18:42:19.800009   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.800019   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:19.800027   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:19.800089   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:19.839978   45244 cri.go:89] found id: ""
	I0229 18:42:19.840008   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.840018   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:19.840026   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:19.840095   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:19.882907   45244 cri.go:89] found id: ""
	I0229 18:42:19.882935   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.882943   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:19.882949   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:19.883002   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:19.927709   45244 cri.go:89] found id: ""
	I0229 18:42:19.927733   45244 logs.go:276] 0 containers: []
	W0229 18:42:19.927742   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:19.927751   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:19.927763   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:19.978950   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:19.978989   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:19.995612   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:19.995645   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:20.077897   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:20.077926   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:20.077942   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:20.113364   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:20.113399   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:22.663447   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:22.691784   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:22.691864   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:22.745738   45244 cri.go:89] found id: ""
	I0229 18:42:22.745770   45244 logs.go:276] 0 containers: []
	W0229 18:42:22.745780   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:22.745787   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:22.745855   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:22.786260   45244 cri.go:89] found id: ""
	I0229 18:42:22.786283   45244 logs.go:276] 0 containers: []
	W0229 18:42:22.786293   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:22.786301   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:22.786361   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:22.830239   45244 cri.go:89] found id: ""
	I0229 18:42:22.830266   45244 logs.go:276] 0 containers: []
	W0229 18:42:22.830277   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:22.830284   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:22.830351   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:22.874890   45244 cri.go:89] found id: ""
	I0229 18:42:22.874914   45244 logs.go:276] 0 containers: []
	W0229 18:42:22.874925   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:22.874933   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:22.874991   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:22.915117   45244 cri.go:89] found id: ""
	I0229 18:42:22.915145   45244 logs.go:276] 0 containers: []
	W0229 18:42:22.915157   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:22.915164   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:22.915228   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:22.959386   45244 cri.go:89] found id: ""
	I0229 18:42:22.959416   45244 logs.go:276] 0 containers: []
	W0229 18:42:22.959426   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:22.959432   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:22.959507   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:23.000537   45244 cri.go:89] found id: ""
	I0229 18:42:23.000561   45244 logs.go:276] 0 containers: []
	W0229 18:42:23.000572   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:23.000581   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:23.000642   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:23.038355   45244 cri.go:89] found id: ""
	I0229 18:42:23.038437   45244 logs.go:276] 0 containers: []
	W0229 18:42:23.038456   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:23.038470   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:23.038487   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:23.095260   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:23.095290   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:23.111301   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:23.111332   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:23.179725   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:23.179752   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:23.179769   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:23.216989   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:23.217016   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:25.765718   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:25.781282   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:25.781342   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:25.824723   45244 cri.go:89] found id: ""
	I0229 18:42:25.824747   45244 logs.go:276] 0 containers: []
	W0229 18:42:25.824754   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:25.824759   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:25.824808   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:25.875176   45244 cri.go:89] found id: ""
	I0229 18:42:25.875207   45244 logs.go:276] 0 containers: []
	W0229 18:42:25.875217   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:25.875223   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:25.875289   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:25.920091   45244 cri.go:89] found id: ""
	I0229 18:42:25.920115   45244 logs.go:276] 0 containers: []
	W0229 18:42:25.920123   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:25.920128   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:25.920180   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:25.958652   45244 cri.go:89] found id: ""
	I0229 18:42:25.958699   45244 logs.go:276] 0 containers: []
	W0229 18:42:25.958711   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:25.958726   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:25.958787   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:25.997455   45244 cri.go:89] found id: ""
	I0229 18:42:25.997487   45244 logs.go:276] 0 containers: []
	W0229 18:42:25.997498   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:25.997506   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:25.997567   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:26.041800   45244 cri.go:89] found id: ""
	I0229 18:42:26.041827   45244 logs.go:276] 0 containers: []
	W0229 18:42:26.041837   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:26.041850   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:26.041912   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:26.083439   45244 cri.go:89] found id: ""
	I0229 18:42:26.083471   45244 logs.go:276] 0 containers: []
	W0229 18:42:26.083480   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:26.083485   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:26.083538   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:26.127770   45244 cri.go:89] found id: ""
	I0229 18:42:26.127798   45244 logs.go:276] 0 containers: []
	W0229 18:42:26.127806   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:26.127815   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:26.127829   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:26.165262   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:26.165302   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:26.212631   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:26.212663   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:26.270265   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:26.270296   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:26.287336   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:26.287360   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:26.388342   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:28.889054   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:28.903630   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:28.903695   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:28.941035   45244 cri.go:89] found id: ""
	I0229 18:42:28.941072   45244 logs.go:276] 0 containers: []
	W0229 18:42:28.941084   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:28.941092   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:28.941153   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:28.990111   45244 cri.go:89] found id: ""
	I0229 18:42:28.990141   45244 logs.go:276] 0 containers: []
	W0229 18:42:28.990152   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:28.990160   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:28.990219   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:29.038438   45244 cri.go:89] found id: ""
	I0229 18:42:29.038465   45244 logs.go:276] 0 containers: []
	W0229 18:42:29.038475   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:29.038481   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:29.038540   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:29.082756   45244 cri.go:89] found id: ""
	I0229 18:42:29.082780   45244 logs.go:276] 0 containers: []
	W0229 18:42:29.082790   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:29.082798   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:29.082856   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:29.120600   45244 cri.go:89] found id: ""
	I0229 18:42:29.120633   45244 logs.go:276] 0 containers: []
	W0229 18:42:29.120644   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:29.120652   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:29.120713   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:29.163043   45244 cri.go:89] found id: ""
	I0229 18:42:29.163068   45244 logs.go:276] 0 containers: []
	W0229 18:42:29.163077   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:29.163083   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:29.163142   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:29.205103   45244 cri.go:89] found id: ""
	I0229 18:42:29.205134   45244 logs.go:276] 0 containers: []
	W0229 18:42:29.205145   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:29.205153   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:29.205212   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:29.253536   45244 cri.go:89] found id: ""
	I0229 18:42:29.253576   45244 logs.go:276] 0 containers: []
	W0229 18:42:29.253587   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:29.253598   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:29.253612   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:29.314347   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:29.314377   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:29.344038   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:29.344073   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:29.456052   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:29.456076   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:29.456088   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:29.490880   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:29.490912   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:32.044054   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:32.059436   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:32.059503   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:32.096674   45244 cri.go:89] found id: ""
	I0229 18:42:32.096703   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.096714   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:32.096722   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:32.096781   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:32.135085   45244 cri.go:89] found id: ""
	I0229 18:42:32.135110   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.135120   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:32.135129   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:32.135187   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:32.171995   45244 cri.go:89] found id: ""
	I0229 18:42:32.172031   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.172049   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:32.172056   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:32.172116   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:32.212362   45244 cri.go:89] found id: ""
	I0229 18:42:32.212392   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.212404   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:32.212412   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:32.212471   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:32.253326   45244 cri.go:89] found id: ""
	I0229 18:42:32.253357   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.253369   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:32.253376   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:32.253432   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:32.293963   45244 cri.go:89] found id: ""
	I0229 18:42:32.293990   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.294000   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:32.294008   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:32.294089   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:32.334472   45244 cri.go:89] found id: ""
	I0229 18:42:32.334501   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.334512   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:32.334520   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:32.334592   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:32.393455   45244 cri.go:89] found id: ""
	I0229 18:42:32.393485   45244 logs.go:276] 0 containers: []
	W0229 18:42:32.393496   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:32.393507   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:32.393521   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:32.449890   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:32.449935   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:32.464844   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:32.464874   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:32.534033   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:32.534058   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:32.534081   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:32.570115   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:32.570143   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:35.123926   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:35.138782   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:35.138859   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:35.178971   45244 cri.go:89] found id: ""
	I0229 18:42:35.179000   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.179010   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:35.179022   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:35.179081   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:35.220787   45244 cri.go:89] found id: ""
	I0229 18:42:35.220808   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.220816   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:35.220821   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:35.220869   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:35.267370   45244 cri.go:89] found id: ""
	I0229 18:42:35.267401   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.267410   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:35.267417   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:35.267476   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:35.308556   45244 cri.go:89] found id: ""
	I0229 18:42:35.308586   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.308612   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:35.308621   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:35.308682   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:35.352260   45244 cri.go:89] found id: ""
	I0229 18:42:35.352315   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.352325   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:35.352331   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:35.352409   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:35.407987   45244 cri.go:89] found id: ""
	I0229 18:42:35.408021   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.408033   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:35.408041   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:35.408127   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:35.469345   45244 cri.go:89] found id: ""
	I0229 18:42:35.469431   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.469454   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:35.469472   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:35.469587   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:35.517190   45244 cri.go:89] found id: ""
	I0229 18:42:35.517219   45244 logs.go:276] 0 containers: []
	W0229 18:42:35.517229   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:35.517240   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:35.517255   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:35.569548   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:35.569589   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:35.587653   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:35.587678   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:35.669375   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:35.669401   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:35.669418   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:35.718250   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:35.718305   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:38.270300   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:38.287367   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:38.287448   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:38.325786   45244 cri.go:89] found id: ""
	I0229 18:42:38.325816   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.325826   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:38.325833   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:38.325896   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:38.379633   45244 cri.go:89] found id: ""
	I0229 18:42:38.379663   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.379673   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:38.379681   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:38.379742   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:38.436965   45244 cri.go:89] found id: ""
	I0229 18:42:38.436994   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.437004   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:38.437012   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:38.437071   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:38.487464   45244 cri.go:89] found id: ""
	I0229 18:42:38.487494   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.487507   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:38.487514   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:38.487575   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:38.526749   45244 cri.go:89] found id: ""
	I0229 18:42:38.526781   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.526792   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:38.526799   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:38.526879   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:38.568205   45244 cri.go:89] found id: ""
	I0229 18:42:38.568238   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.568249   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:38.568257   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:38.568320   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:38.607146   45244 cri.go:89] found id: ""
	I0229 18:42:38.607177   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.607189   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:38.607199   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:38.607264   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:38.649356   45244 cri.go:89] found id: ""
	I0229 18:42:38.649385   45244 logs.go:276] 0 containers: []
	W0229 18:42:38.649397   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:38.649407   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:38.649422   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:38.700709   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:38.700743   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:38.718840   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:38.718867   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:38.802888   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:38.802914   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:38.802928   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:38.842500   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:38.842524   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:41.397099   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:41.411904   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:41.411982   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:41.476220   45244 cri.go:89] found id: ""
	I0229 18:42:41.476249   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.476260   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:41.476268   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:41.476335   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:41.520169   45244 cri.go:89] found id: ""
	I0229 18:42:41.520198   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.520210   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:41.520217   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:41.520283   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:41.560993   45244 cri.go:89] found id: ""
	I0229 18:42:41.561023   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.561034   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:41.561042   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:41.561100   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:41.602498   45244 cri.go:89] found id: ""
	I0229 18:42:41.602528   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.602540   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:41.602565   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:41.602628   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:41.642416   45244 cri.go:89] found id: ""
	I0229 18:42:41.642448   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.642459   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:41.642466   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:41.642519   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:41.681542   45244 cri.go:89] found id: ""
	I0229 18:42:41.681572   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.681583   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:41.681598   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:41.681662   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:41.726191   45244 cri.go:89] found id: ""
	I0229 18:42:41.726215   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.726223   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:41.726229   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:41.726278   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:41.765037   45244 cri.go:89] found id: ""
	I0229 18:42:41.765071   45244 logs.go:276] 0 containers: []
	W0229 18:42:41.765082   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:41.765092   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:41.765108   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:41.814801   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:41.814836   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:41.830231   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:41.830259   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:41.912475   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:41.912503   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:41.912516   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:41.954224   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:41.954258   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:44.498342   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:44.512056   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:42:44.512118   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:42:44.560516   45244 cri.go:89] found id: ""
	I0229 18:42:44.560550   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.560561   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:44.560569   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:42:44.560630   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:42:44.605689   45244 cri.go:89] found id: ""
	I0229 18:42:44.605715   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.605726   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:42:44.605733   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:42:44.605809   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:42:44.641756   45244 cri.go:89] found id: ""
	I0229 18:42:44.641780   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.641789   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:42:44.641797   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:42:44.641856   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:42:44.679738   45244 cri.go:89] found id: ""
	I0229 18:42:44.679764   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.679773   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:44.679778   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:42:44.679823   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:42:44.719499   45244 cri.go:89] found id: ""
	I0229 18:42:44.719532   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.719544   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:44.719551   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:42:44.719613   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:42:44.757645   45244 cri.go:89] found id: ""
	I0229 18:42:44.757668   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.757680   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:44.757686   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:42:44.757812   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:42:44.800039   45244 cri.go:89] found id: ""
	I0229 18:42:44.800077   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.800088   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:44.800095   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:42:44.800152   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:42:44.840001   45244 cri.go:89] found id: ""
	I0229 18:42:44.840036   45244 logs.go:276] 0 containers: []
	W0229 18:42:44.840047   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:44.840067   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:44.840081   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:44.889772   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:44.889810   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:44.909808   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:44.909841   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:44.991901   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:44.991928   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:42:44.991957   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:42:45.028408   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:42:45.028445   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:47.582662   45244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:47.598413   45244 kubeadm.go:640] restartCluster took 4m11.888545977s
	W0229 18:42:47.598479   45244 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 18:42:47.598503   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 18:42:48.060667   45244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:42:48.080893   45244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:42:48.092068   45244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:42:48.103461   45244 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:42:48.103494   45244 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:42:48.167705   45244 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:42:48.167802   45244 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:42:48.337746   45244 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:42:48.337907   45244 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:42:48.338001   45244 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:42:48.571443   45244 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:42:48.572746   45244 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:42:48.581907   45244 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:42:48.724589   45244 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:42:48.726416   45244 out.go:204]   - Generating certificates and keys ...
	I0229 18:42:48.726516   45244 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:42:48.726637   45244 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:42:48.726764   45244 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:42:48.726861   45244 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:42:48.726975   45244 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:42:48.727071   45244 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:42:48.727172   45244 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:42:48.727265   45244 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:42:48.727380   45244 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:42:48.727484   45244 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:42:48.727608   45244 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:42:48.727723   45244 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:42:48.862239   45244 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:42:49.211240   45244 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:42:49.411232   45244 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:42:49.493080   45244 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:42:49.494086   45244 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:42:49.495680   45244 out.go:204]   - Booting up control plane ...
	I0229 18:42:49.495789   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:42:49.501060   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:42:49.503468   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:42:49.505090   45244 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:42:49.508900   45244 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:43:29.508834   45244 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:43:29.510059   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:29.510384   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:43:34.511135   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:34.511390   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:43:44.511572   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:43:44.511862   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:44:04.512484   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:44:04.513019   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:44:44.514465   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:44:44.514765   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:44:44.514801   45244 kubeadm.go:322] 
	I0229 18:44:44.514855   45244 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:44:44.514911   45244 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:44:44.514925   45244 kubeadm.go:322] 
	I0229 18:44:44.514987   45244 kubeadm.go:322] This error is likely caused by:
	I0229 18:44:44.515033   45244 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:44:44.515165   45244 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:44:44.515180   45244 kubeadm.go:322] 
	I0229 18:44:44.515325   45244 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:44:44.515380   45244 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:44:44.515430   45244 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:44:44.515437   45244 kubeadm.go:322] 
	I0229 18:44:44.515583   45244 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:44:44.515726   45244 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:44:44.515843   45244 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:44:44.515903   45244 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:44:44.515991   45244 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:44:44.516048   45244 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:44:44.516993   45244 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:44:44.517121   45244 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:44:44.517215   45244 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:44:44.517380   45244 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:44:44.517427   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0229 18:44:45.014260   45244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:45.036623   45244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:45.052492   45244 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:44:45.052537   45244 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:44:45.322089   45244 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:46:41.851807   45244 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:46:41.851974   45244 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:46:41.853689   45244 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:46:41.853746   45244 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:46:41.853843   45244 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:46:41.853991   45244 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:46:41.854132   45244 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:46:41.854295   45244 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:46:41.854409   45244 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:46:41.854495   45244 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:46:41.854606   45244 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:46:41.856466   45244 out.go:204]   - Generating certificates and keys ...
	I0229 18:46:41.856560   45244 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:46:41.856653   45244 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:46:41.856765   45244 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:46:41.856861   45244 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:46:41.856967   45244 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:46:41.857052   45244 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:46:41.857135   45244 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:46:41.857209   45244 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:46:41.857290   45244 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:46:41.857381   45244 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:46:41.857441   45244 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:46:41.857523   45244 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:46:41.857606   45244 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:46:41.857699   45244 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:46:41.857777   45244 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:46:41.857827   45244 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:46:41.857886   45244 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:46:41.859219   45244 out.go:204]   - Booting up control plane ...
	I0229 18:46:41.859312   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:46:41.859400   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:46:41.859458   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:46:41.859547   45244 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:46:41.859727   45244 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:46:41.859795   45244 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:46:41.859869   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.860155   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.860236   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.860476   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.860583   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.860796   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.860896   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.861109   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.861212   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.861426   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.861438   45244 kubeadm.go:322] 
	I0229 18:46:41.861474   45244 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:46:41.861508   45244 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:46:41.861515   45244 kubeadm.go:322] 
	I0229 18:46:41.861542   45244 kubeadm.go:322] This error is likely caused by:
	I0229 18:46:41.861574   45244 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:46:41.861691   45244 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:46:41.861703   45244 kubeadm.go:322] 
	I0229 18:46:41.861847   45244 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:46:41.861898   45244 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:46:41.861947   45244 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:46:41.861957   45244 kubeadm.go:322] 
	I0229 18:46:41.862088   45244 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:46:41.862219   45244 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:46:41.862337   45244 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:46:41.862416   45244 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:46:41.862530   45244 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:46:41.862653   45244 kubeadm.go:406] StartCluster complete in 8m6.209733519s
	I0229 18:46:41.862678   45244 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:46:41.862717   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:46:41.862784   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:46:41.922997   45244 cri.go:89] found id: ""
	I0229 18:46:41.923026   45244 logs.go:276] 0 containers: []
	W0229 18:46:41.923038   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:46:41.923046   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:46:41.923115   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:46:41.974402   45244 cri.go:89] found id: ""
	I0229 18:46:41.974433   45244 logs.go:276] 0 containers: []
	W0229 18:46:41.974445   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:46:41.974452   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:46:41.974529   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:46:42.045238   45244 cri.go:89] found id: ""
	I0229 18:46:42.045265   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.045276   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:46:42.045283   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:46:42.045350   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:46:42.088338   45244 cri.go:89] found id: ""
	I0229 18:46:42.088365   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.088376   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:46:42.088384   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:46:42.088450   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:46:42.129386   45244 cri.go:89] found id: ""
	I0229 18:46:42.129416   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.129428   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:46:42.129435   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:46:42.129502   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:46:42.171873   45244 cri.go:89] found id: ""
	I0229 18:46:42.171894   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.171902   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:46:42.171908   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:46:42.171958   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:46:42.211632   45244 cri.go:89] found id: ""
	I0229 18:46:42.211656   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.211664   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:46:42.211669   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:46:42.211729   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:46:42.261816   45244 cri.go:89] found id: ""
	I0229 18:46:42.261837   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.261844   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:46:42.261852   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:46:42.261863   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:46:42.313140   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:46:42.313173   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:46:42.327911   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:46:42.327944   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:46:42.411111   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:46:42.411164   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:46:42.411177   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:46:42.456959   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:46:42.457002   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 18:46:42.508698   45244 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:46:42.508753   45244 out.go:239] * 
	* 
	W0229 18:46:42.508820   45244 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:46:42.508841   45244 out.go:239] * 
	* 
	W0229 18:46:42.509757   45244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:46:42.512698   45244 out.go:177] 
	W0229 18:46:42.514014   45244 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:46:42.514077   45244 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:46:42.514104   45244 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:46:42.515555   45244 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-561577 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 109
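For reference, a minimal sketch of the follow-up that the kubeadm output and the minikube suggestion above point to, run against the same profile with an abbreviated set of the failing command's flags; the crictl invocation is an assumed containerd-side substitute for the 'docker ps' example kubeadm prints and is not taken from this log:

	# inspect the kubelet inside the minikube VM
	out/minikube-linux-amd64 ssh -p old-k8s-version-561577 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-561577 -- sudo journalctl -xeu kubelet
	# containerd analogue of 'docker ps -a | grep kube | grep -v pause' (assumption)
	out/minikube-linux-amd64 ssh -p old-k8s-version-561577 -- sudo crictl ps -a | grep kube | grep -v pause
	# retry the start with the cgroup-driver hint from the suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-561577 --memory=2200 --alsologtostderr --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd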
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (296.383361ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-561577 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-462109 --memory=2200 --alsologtostderr   | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:43 UTC | 29 Feb 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-462109             | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | no-preload-644659 image list                           | no-preload-644659            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-644659                                   | no-preload-644659            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-462109                                   | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-644659                                   | no-preload-644659            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-462109                  | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-462109 --memory=2200 --alsologtostderr   | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:45 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-644659                                   | no-preload-644659            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| delete  | -p no-preload-644659                                   | no-preload-644659            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| start   | -p auto-387000 --memory=3072                           | auto-387000                  | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:46 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| image   | newest-cni-462109 image list                           | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-462109                                   | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-462109                                   | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-462109                                   | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| delete  | -p newest-cni-462109                                   | newest-cni-462109            | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:45 UTC |
	| start   | -p kindnet-387000                                      | kindnet-387000               | jenkins | v1.32.0 | 29 Feb 24 18:45 UTC | 29 Feb 24 18:46 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-459722                           | default-k8s-diff-port-459722 | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-459722 | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | default-k8s-diff-port-459722                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-459722 | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | default-k8s-diff-port-459722                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-459722 | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | default-k8s-diff-port-459722                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-459722 | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | default-k8s-diff-port-459722                           |                              |         |         |                     |                     |
	| start   | -p calico-387000 --memory=3072                         | calico-387000                | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | -p kindnet-387000 pgrep -a                             | kindnet-387000               | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-387000 pgrep -a                                | auto-387000                  | jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:46 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:46:05
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:46:05.005531   49712 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:46:05.005669   49712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:46:05.005681   49712 out.go:304] Setting ErrFile to fd 2...
	I0229 18:46:05.005688   49712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:46:05.005987   49712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:46:05.006812   49712 out.go:298] Setting JSON to false
	I0229 18:46:05.008127   49712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5306,"bootTime":1709227059,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:46:05.008209   49712 start.go:139] virtualization: kvm guest
	I0229 18:46:05.010664   49712 out.go:177] * [calico-387000] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:46:05.012092   49712 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:46:05.013396   49712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:46:05.012122   49712 notify.go:220] Checking for updates...
	I0229 18:46:05.014801   49712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:46:05.016235   49712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:46:05.017532   49712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:46:05.018785   49712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:46:05.020683   49712 config.go:182] Loaded profile config "auto-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:46:05.020823   49712 config.go:182] Loaded profile config "kindnet-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:46:05.020955   49712 config.go:182] Loaded profile config "old-k8s-version-561577": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:46:05.021055   49712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:46:05.056678   49712 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:46:05.058005   49712 start.go:299] selected driver: kvm2
	I0229 18:46:05.058021   49712 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:46:05.058033   49712 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:46:05.059018   49712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:46:05.059107   49712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:46:05.073856   49712 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:46:05.073912   49712 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:46:05.074138   49712 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:46:05.074214   49712 cni.go:84] Creating CNI manager for "calico"
	I0229 18:46:05.074229   49712 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I0229 18:46:05.074243   49712 start_flags.go:323] config:
	{Name:calico-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:46:05.074393   49712 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:46:05.076188   49712 out.go:177] * Starting control plane node calico-387000 in cluster calico-387000
	I0229 18:46:05.077348   49712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:46:05.077383   49712 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 18:46:05.077393   49712 cache.go:56] Caching tarball of preloaded images
	I0229 18:46:05.077480   49712 preload.go:174] Found /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:46:05.077491   49712 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 18:46:05.077591   49712 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/config.json ...
	I0229 18:46:05.077614   49712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/config.json: {Name:mke21f6985a2f409581131f7886b20d218f1e86e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:05.077768   49712 start.go:365] acquiring machines lock for calico-387000: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:46:05.077804   49712 start.go:369] acquired machines lock for "calico-387000" in 20.133µs
	I0229 18:46:05.077824   49712 start.go:93] Provisioning new machine with config: &{Name:calico-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:46:05.077912   49712 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:46:01.003956   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:01.504799   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:02.004701   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:02.504570   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:03.004541   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:03.504607   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:04.004142   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:04.504874   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:05.004596   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:05.504850   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:05.857588   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:07.857824   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:05.079492   49712 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 18:46:05.079627   49712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:46:05.079673   49712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:05.093927   49712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40981
	I0229 18:46:05.094370   49712 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:05.094910   49712 main.go:141] libmachine: Using API Version  1
	I0229 18:46:05.094929   49712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:05.095258   49712 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:05.095435   49712 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I0229 18:46:05.095599   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:05.095764   49712 start.go:159] libmachine.API.Create for "calico-387000" (driver="kvm2")
	I0229 18:46:05.095807   49712 client.go:168] LocalClient.Create starting
	I0229 18:46:05.095841   49712 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem
	I0229 18:46:05.095880   49712 main.go:141] libmachine: Decoding PEM data...
	I0229 18:46:05.095898   49712 main.go:141] libmachine: Parsing certificate...
	I0229 18:46:05.095947   49712 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem
	I0229 18:46:05.095966   49712 main.go:141] libmachine: Decoding PEM data...
	I0229 18:46:05.095978   49712 main.go:141] libmachine: Parsing certificate...
	I0229 18:46:05.095993   49712 main.go:141] libmachine: Running pre-create checks...
	I0229 18:46:05.095999   49712 main.go:141] libmachine: (calico-387000) Calling .PreCreateCheck
	I0229 18:46:05.096400   49712 main.go:141] libmachine: (calico-387000) Calling .GetConfigRaw
	I0229 18:46:05.096788   49712 main.go:141] libmachine: Creating machine...
	I0229 18:46:05.096801   49712 main.go:141] libmachine: (calico-387000) Calling .Create
	I0229 18:46:05.096926   49712 main.go:141] libmachine: (calico-387000) Creating KVM machine...
	I0229 18:46:05.098276   49712 main.go:141] libmachine: (calico-387000) DBG | found existing default KVM network
	I0229 18:46:05.099534   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:05.099360   49735 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ed:28:6a} reservation:<nil>}
	I0229 18:46:05.100776   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:05.100694   49735 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6750}
	I0229 18:46:05.106061   49712 main.go:141] libmachine: (calico-387000) DBG | trying to create private KVM network mk-calico-387000 192.168.50.0/24...
	I0229 18:46:05.180072   49712 main.go:141] libmachine: (calico-387000) DBG | private KVM network mk-calico-387000 192.168.50.0/24 created
	I0229 18:46:05.180110   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:05.180029   49735 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:46:05.180183   49712 main.go:141] libmachine: (calico-387000) Setting up store path in /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000 ...
	I0229 18:46:05.180234   49712 main.go:141] libmachine: (calico-387000) Building disk image from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:46:05.180271   49712 main.go:141] libmachine: (calico-387000) Downloading /home/jenkins/minikube-integration/18259-6412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:46:05.405448   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:05.405323   49735 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/id_rsa...
	I0229 18:46:05.541790   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:05.541627   49735 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/calico-387000.rawdisk...
	I0229 18:46:05.541826   49712 main.go:141] libmachine: (calico-387000) DBG | Writing magic tar header
	I0229 18:46:05.541841   49712 main.go:141] libmachine: (calico-387000) DBG | Writing SSH key tar header
	I0229 18:46:05.541854   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:05.541773   49735 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000 ...
	I0229 18:46:05.541941   49712 main.go:141] libmachine: (calico-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000
	I0229 18:46:05.541975   49712 main.go:141] libmachine: (calico-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines
	I0229 18:46:05.541990   49712 main.go:141] libmachine: (calico-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000 (perms=drwx------)
	I0229 18:46:05.542006   49712 main.go:141] libmachine: (calico-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:46:05.542015   49712 main.go:141] libmachine: (calico-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube (perms=drwxr-xr-x)
	I0229 18:46:05.542026   49712 main.go:141] libmachine: (calico-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412 (perms=drwxrwxr-x)
	I0229 18:46:05.542038   49712 main.go:141] libmachine: (calico-387000) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:46:05.542069   49712 main.go:141] libmachine: (calico-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:46:05.542092   49712 main.go:141] libmachine: (calico-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412
	I0229 18:46:05.542103   49712 main.go:141] libmachine: (calico-387000) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:46:05.542119   49712 main.go:141] libmachine: (calico-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:46:05.542131   49712 main.go:141] libmachine: (calico-387000) Creating domain...
	I0229 18:46:05.542146   49712 main.go:141] libmachine: (calico-387000) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:46:05.542156   49712 main.go:141] libmachine: (calico-387000) DBG | Checking permissions on dir: /home
	I0229 18:46:05.542166   49712 main.go:141] libmachine: (calico-387000) DBG | Skipping /home - not owner
	I0229 18:46:05.543306   49712 main.go:141] libmachine: (calico-387000) define libvirt domain using xml: 
	I0229 18:46:05.543331   49712 main.go:141] libmachine: (calico-387000) <domain type='kvm'>
	I0229 18:46:05.543341   49712 main.go:141] libmachine: (calico-387000)   <name>calico-387000</name>
	I0229 18:46:05.543351   49712 main.go:141] libmachine: (calico-387000)   <memory unit='MiB'>3072</memory>
	I0229 18:46:05.543361   49712 main.go:141] libmachine: (calico-387000)   <vcpu>2</vcpu>
	I0229 18:46:05.543368   49712 main.go:141] libmachine: (calico-387000)   <features>
	I0229 18:46:05.543383   49712 main.go:141] libmachine: (calico-387000)     <acpi/>
	I0229 18:46:05.543392   49712 main.go:141] libmachine: (calico-387000)     <apic/>
	I0229 18:46:05.543400   49712 main.go:141] libmachine: (calico-387000)     <pae/>
	I0229 18:46:05.543409   49712 main.go:141] libmachine: (calico-387000)     
	I0229 18:46:05.543417   49712 main.go:141] libmachine: (calico-387000)   </features>
	I0229 18:46:05.543432   49712 main.go:141] libmachine: (calico-387000)   <cpu mode='host-passthrough'>
	I0229 18:46:05.543443   49712 main.go:141] libmachine: (calico-387000)   
	I0229 18:46:05.543449   49712 main.go:141] libmachine: (calico-387000)   </cpu>
	I0229 18:46:05.543457   49712 main.go:141] libmachine: (calico-387000)   <os>
	I0229 18:46:05.543464   49712 main.go:141] libmachine: (calico-387000)     <type>hvm</type>
	I0229 18:46:05.543476   49712 main.go:141] libmachine: (calico-387000)     <boot dev='cdrom'/>
	I0229 18:46:05.543486   49712 main.go:141] libmachine: (calico-387000)     <boot dev='hd'/>
	I0229 18:46:05.543493   49712 main.go:141] libmachine: (calico-387000)     <bootmenu enable='no'/>
	I0229 18:46:05.543506   49712 main.go:141] libmachine: (calico-387000)   </os>
	I0229 18:46:05.543517   49712 main.go:141] libmachine: (calico-387000)   <devices>
	I0229 18:46:05.543534   49712 main.go:141] libmachine: (calico-387000)     <disk type='file' device='cdrom'>
	I0229 18:46:05.543553   49712 main.go:141] libmachine: (calico-387000)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/boot2docker.iso'/>
	I0229 18:46:05.543565   49712 main.go:141] libmachine: (calico-387000)       <target dev='hdc' bus='scsi'/>
	I0229 18:46:05.543602   49712 main.go:141] libmachine: (calico-387000)       <readonly/>
	I0229 18:46:05.543630   49712 main.go:141] libmachine: (calico-387000)     </disk>
	I0229 18:46:05.543645   49712 main.go:141] libmachine: (calico-387000)     <disk type='file' device='disk'>
	I0229 18:46:05.543658   49712 main.go:141] libmachine: (calico-387000)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:46:05.543674   49712 main.go:141] libmachine: (calico-387000)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/calico-387000.rawdisk'/>
	I0229 18:46:05.543686   49712 main.go:141] libmachine: (calico-387000)       <target dev='hda' bus='virtio'/>
	I0229 18:46:05.543698   49712 main.go:141] libmachine: (calico-387000)     </disk>
	I0229 18:46:05.543710   49712 main.go:141] libmachine: (calico-387000)     <interface type='network'>
	I0229 18:46:05.543723   49712 main.go:141] libmachine: (calico-387000)       <source network='mk-calico-387000'/>
	I0229 18:46:05.543731   49712 main.go:141] libmachine: (calico-387000)       <model type='virtio'/>
	I0229 18:46:05.543757   49712 main.go:141] libmachine: (calico-387000)     </interface>
	I0229 18:46:05.543794   49712 main.go:141] libmachine: (calico-387000)     <interface type='network'>
	I0229 18:46:05.543808   49712 main.go:141] libmachine: (calico-387000)       <source network='default'/>
	I0229 18:46:05.543819   49712 main.go:141] libmachine: (calico-387000)       <model type='virtio'/>
	I0229 18:46:05.543831   49712 main.go:141] libmachine: (calico-387000)     </interface>
	I0229 18:46:05.543838   49712 main.go:141] libmachine: (calico-387000)     <serial type='pty'>
	I0229 18:46:05.543850   49712 main.go:141] libmachine: (calico-387000)       <target port='0'/>
	I0229 18:46:05.543860   49712 main.go:141] libmachine: (calico-387000)     </serial>
	I0229 18:46:05.543868   49712 main.go:141] libmachine: (calico-387000)     <console type='pty'>
	I0229 18:46:05.543888   49712 main.go:141] libmachine: (calico-387000)       <target type='serial' port='0'/>
	I0229 18:46:05.543900   49712 main.go:141] libmachine: (calico-387000)     </console>
	I0229 18:46:05.543911   49712 main.go:141] libmachine: (calico-387000)     <rng model='virtio'>
	I0229 18:46:05.543924   49712 main.go:141] libmachine: (calico-387000)       <backend model='random'>/dev/random</backend>
	I0229 18:46:05.543938   49712 main.go:141] libmachine: (calico-387000)     </rng>
	I0229 18:46:05.543950   49712 main.go:141] libmachine: (calico-387000)     
	I0229 18:46:05.543956   49712 main.go:141] libmachine: (calico-387000)     
	I0229 18:46:05.543964   49712 main.go:141] libmachine: (calico-387000)   </devices>
	I0229 18:46:05.543970   49712 main.go:141] libmachine: (calico-387000) </domain>
	I0229 18:46:05.543980   49712 main.go:141] libmachine: (calico-387000) 
	I0229 18:46:05.548159   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:36:41:e7 in network default
	I0229 18:46:05.548795   49712 main.go:141] libmachine: (calico-387000) Ensuring networks are active...
	I0229 18:46:05.548813   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:05.549455   49712 main.go:141] libmachine: (calico-387000) Ensuring network default is active
	I0229 18:46:05.549736   49712 main.go:141] libmachine: (calico-387000) Ensuring network mk-calico-387000 is active
	I0229 18:46:05.550230   49712 main.go:141] libmachine: (calico-387000) Getting domain xml...
	I0229 18:46:05.550999   49712 main.go:141] libmachine: (calico-387000) Creating domain...
	I0229 18:46:06.811769   49712 main.go:141] libmachine: (calico-387000) Waiting to get IP...
	I0229 18:46:06.812489   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:06.812975   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:06.813006   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:06.812949   49735 retry.go:31] will retry after 232.001683ms: waiting for machine to come up
	I0229 18:46:07.046382   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:07.046929   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:07.046954   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:07.046893   49735 retry.go:31] will retry after 370.519671ms: waiting for machine to come up
	I0229 18:46:07.419428   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:07.419951   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:07.419995   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:07.419908   49735 retry.go:31] will retry after 426.919762ms: waiting for machine to come up
	I0229 18:46:07.848062   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:07.848581   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:07.848607   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:07.848541   49735 retry.go:31] will retry after 461.441394ms: waiting for machine to come up
	I0229 18:46:08.311132   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:08.311751   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:08.311780   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:08.311698   49735 retry.go:31] will retry after 704.024508ms: waiting for machine to come up
	I0229 18:46:09.016791   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:09.017309   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:09.017338   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:09.017257   49735 retry.go:31] will retry after 624.506075ms: waiting for machine to come up
	I0229 18:46:09.643340   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:09.643938   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:09.643975   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:09.643895   49735 retry.go:31] will retry after 1.129656543s: waiting for machine to come up
	I0229 18:46:06.004757   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:06.504925   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:07.004485   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:07.504835   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:08.004698   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:08.504739   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:09.003907   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:09.504736   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:10.003980   48964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:46:10.145316   48964 kubeadm.go:1088] duration metric: took 11.859889456s to wait for elevateKubeSystemPrivileges.
	I0229 18:46:10.145350   48964 kubeadm.go:406] StartCluster complete in 24.785714868s
	I0229 18:46:10.145371   48964 settings.go:142] acquiring lock: {Name:mk54a855ef147e30c2cf7f1217afa4524cb1d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:10.145451   48964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:46:10.146818   48964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/kubeconfig: {Name:mk5f8fb7db84beb25fa22fdc3301133bb69ddfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:10.147092   48964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:46:10.147251   48964 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:46:10.147309   48964 config.go:182] Loaded profile config "kindnet-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:46:10.147339   48964 addons.go:69] Setting storage-provisioner=true in profile "kindnet-387000"
	I0229 18:46:10.147362   48964 addons.go:234] Setting addon storage-provisioner=true in "kindnet-387000"
	I0229 18:46:10.147370   48964 addons.go:69] Setting default-storageclass=true in profile "kindnet-387000"
	I0229 18:46:10.147393   48964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-387000"
	I0229 18:46:10.147411   48964 host.go:66] Checking if "kindnet-387000" exists ...
	I0229 18:46:10.147876   48964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:46:10.147875   48964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:46:10.147933   48964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:10.147904   48964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:10.170276   48964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0229 18:46:10.170308   48964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0229 18:46:10.170763   48964 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:10.170869   48964 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:10.171516   48964 main.go:141] libmachine: Using API Version  1
	I0229 18:46:10.171536   48964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:10.171669   48964 main.go:141] libmachine: Using API Version  1
	I0229 18:46:10.171688   48964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:10.172022   48964 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:10.172228   48964 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:10.172432   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetState
	I0229 18:46:10.172626   48964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:46:10.172645   48964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:10.175782   48964 addons.go:234] Setting addon default-storageclass=true in "kindnet-387000"
	I0229 18:46:10.175819   48964 host.go:66] Checking if "kindnet-387000" exists ...
	I0229 18:46:10.176243   48964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:46:10.176273   48964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:10.193308   48964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39145
	I0229 18:46:10.193881   48964 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:10.193954   48964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0229 18:46:10.194355   48964 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:10.194477   48964 main.go:141] libmachine: Using API Version  1
	I0229 18:46:10.194513   48964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:10.194937   48964 main.go:141] libmachine: Using API Version  1
	I0229 18:46:10.194957   48964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:10.194976   48964 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:10.195492   48964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:46:10.195524   48964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:46:10.195720   48964 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:10.195939   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetState
	I0229 18:46:10.197884   48964 main.go:141] libmachine: (kindnet-387000) Calling .DriverName
	I0229 18:46:10.199601   48964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:46:10.200946   48964 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:46:10.200965   48964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:46:10.200983   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHHostname
	I0229 18:46:10.204640   48964 main.go:141] libmachine: (kindnet-387000) DBG | domain kindnet-387000 has defined MAC address 52:54:00:da:10:03 in network mk-kindnet-387000
	I0229 18:46:10.205121   48964 main.go:141] libmachine: (kindnet-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:03", ip: ""} in network mk-kindnet-387000: {Iface:virbr2 ExpiryTime:2024-02-29 19:45:27 +0000 UTC Type:0 Mac:52:54:00:da:10:03 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kindnet-387000 Clientid:01:52:54:00:da:10:03}
	I0229 18:46:10.205159   48964 main.go:141] libmachine: (kindnet-387000) DBG | domain kindnet-387000 has defined IP address 192.168.61.182 and MAC address 52:54:00:da:10:03 in network mk-kindnet-387000
	I0229 18:46:10.205374   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHPort
	I0229 18:46:10.205567   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHKeyPath
	I0229 18:46:10.205698   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHUsername
	I0229 18:46:10.205883   48964 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kindnet-387000/id_rsa Username:docker}
	I0229 18:46:10.213503   48964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35327
	I0229 18:46:10.213867   48964 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:46:10.214346   48964 main.go:141] libmachine: Using API Version  1
	I0229 18:46:10.214358   48964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:46:10.214737   48964 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:46:10.214916   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetState
	I0229 18:46:10.216779   48964 main.go:141] libmachine: (kindnet-387000) Calling .DriverName
	I0229 18:46:10.217006   48964 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:46:10.217016   48964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:46:10.217027   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHHostname
	I0229 18:46:10.220667   48964 main.go:141] libmachine: (kindnet-387000) DBG | domain kindnet-387000 has defined MAC address 52:54:00:da:10:03 in network mk-kindnet-387000
	I0229 18:46:10.221125   48964 main.go:141] libmachine: (kindnet-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:03", ip: ""} in network mk-kindnet-387000: {Iface:virbr2 ExpiryTime:2024-02-29 19:45:27 +0000 UTC Type:0 Mac:52:54:00:da:10:03 Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kindnet-387000 Clientid:01:52:54:00:da:10:03}
	I0229 18:46:10.221145   48964 main.go:141] libmachine: (kindnet-387000) DBG | domain kindnet-387000 has defined IP address 192.168.61.182 and MAC address 52:54:00:da:10:03 in network mk-kindnet-387000
	I0229 18:46:10.221349   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHPort
	I0229 18:46:10.221526   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHKeyPath
	I0229 18:46:10.221679   48964 main.go:141] libmachine: (kindnet-387000) Calling .GetSSHUsername
	I0229 18:46:10.221795   48964 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/kindnet-387000/id_rsa Username:docker}
	I0229 18:46:10.402092   48964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 18:46:10.407124   48964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:46:10.516915   48964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:46:10.652180   48964 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-387000" context rescaled to 1 replicas
	I0229 18:46:10.652211   48964 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:46:10.653739   48964 out.go:177] * Verifying Kubernetes components...
	I0229 18:46:10.655129   48964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:46:11.441089   48964 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.038944412s)
	I0229 18:46:11.441124   48964 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 18:46:11.668216   48964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.151259466s)
	I0229 18:46:11.668269   48964 main.go:141] libmachine: Making call to close driver server
	I0229 18:46:11.668273   48964 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.013114574s)
	I0229 18:46:11.668356   48964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261199731s)
	I0229 18:46:11.668392   48964 main.go:141] libmachine: Making call to close driver server
	I0229 18:46:11.668282   48964 main.go:141] libmachine: (kindnet-387000) Calling .Close
	I0229 18:46:11.668414   48964 main.go:141] libmachine: (kindnet-387000) Calling .Close
	I0229 18:46:11.668726   48964 main.go:141] libmachine: (kindnet-387000) DBG | Closing plugin on server side
	I0229 18:46:11.668781   48964 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:46:11.668792   48964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:46:11.668802   48964 main.go:141] libmachine: Making call to close driver server
	I0229 18:46:11.668823   48964 main.go:141] libmachine: (kindnet-387000) Calling .Close
	I0229 18:46:11.669072   48964 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:46:11.669088   48964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:46:11.669727   48964 node_ready.go:35] waiting up to 15m0s for node "kindnet-387000" to be "Ready" ...
	I0229 18:46:11.670188   48964 main.go:141] libmachine: (kindnet-387000) DBG | Closing plugin on server side
	I0229 18:46:11.670203   48964 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:46:11.670255   48964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:46:11.670272   48964 main.go:141] libmachine: Making call to close driver server
	I0229 18:46:11.670279   48964 main.go:141] libmachine: (kindnet-387000) Calling .Close
	I0229 18:46:11.670651   48964 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:46:11.670688   48964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:46:11.670689   48964 main.go:141] libmachine: (kindnet-387000) DBG | Closing plugin on server side
	I0229 18:46:11.684501   48964 main.go:141] libmachine: Making call to close driver server
	I0229 18:46:11.684515   48964 main.go:141] libmachine: (kindnet-387000) Calling .Close
	I0229 18:46:11.684794   48964 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:46:11.684815   48964 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:46:11.687262   48964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:46:10.358061   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:12.358272   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:10.774815   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:10.775439   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:10.775465   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:10.775411   49735 retry.go:31] will retry after 1.329051777s: waiting for machine to come up
	I0229 18:46:12.106864   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:12.107338   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:12.107362   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:12.107292   49735 retry.go:31] will retry after 1.475415651s: waiting for machine to come up
	I0229 18:46:13.584000   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:13.584620   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:13.584656   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:13.584580   49735 retry.go:31] will retry after 2.007264326s: waiting for machine to come up
	I0229 18:46:11.688608   48964 addons.go:505] enable addons completed in 1.541354874s: enabled=[storage-provisioner default-storageclass]
	I0229 18:46:13.673228   48964 node_ready.go:58] node "kindnet-387000" has status "Ready":"False"
	I0229 18:46:14.358359   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:16.858919   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:15.675003   48964 node_ready.go:58] node "kindnet-387000" has status "Ready":"False"
	I0229 18:46:16.673769   48964 node_ready.go:49] node "kindnet-387000" has status "Ready":"True"
	I0229 18:46:16.673795   48964 node_ready.go:38] duration metric: took 5.004044676s waiting for node "kindnet-387000" to be "Ready" ...
	I0229 18:46:16.673803   48964 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:46:16.681694   48964 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-gg4dg" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.691578   48964 pod_ready.go:92] pod "coredns-5dd5756b68-gg4dg" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:18.691605   48964 pod_ready.go:81] duration metric: took 2.009888122s waiting for pod "coredns-5dd5756b68-gg4dg" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.691618   48964 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.700273   48964 pod_ready.go:92] pod "etcd-kindnet-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:18.700303   48964 pod_ready.go:81] duration metric: took 8.676724ms waiting for pod "etcd-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.700320   48964 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.707055   48964 pod_ready.go:92] pod "kube-apiserver-kindnet-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:18.707081   48964 pod_ready.go:81] duration metric: took 6.751182ms waiting for pod "kube-apiserver-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.707098   48964 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.717635   48964 pod_ready.go:92] pod "kube-controller-manager-kindnet-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:18.717661   48964 pod_ready.go:81] duration metric: took 10.553572ms waiting for pod "kube-controller-manager-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.717674   48964 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-498rl" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.724769   48964 pod_ready.go:92] pod "kube-proxy-498rl" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:18.724789   48964 pod_ready.go:81] duration metric: took 7.108826ms waiting for pod "kube-proxy-498rl" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:18.724797   48964 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:19.086118   48964 pod_ready.go:92] pod "kube-scheduler-kindnet-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:19.086140   48964 pod_ready.go:81] duration metric: took 361.337492ms waiting for pod "kube-scheduler-kindnet-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:19.086150   48964 pod_ready.go:38] duration metric: took 2.412327275s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
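	The pod_ready helper above polls each system-critical pod until its Ready condition is True. A rough kubectl equivalent of the same waits (an approximation for manual reproduction, not the code the test actually runs):
	  kubectl wait --for=condition=Ready node/kindnet-387000 --timeout=15m
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m
	  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=15m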
	I0229 18:46:19.086163   48964 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:46:19.086208   48964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:46:19.103587   48964 api_server.go:72] duration metric: took 8.451350741s to wait for apiserver process to appear ...
	I0229 18:46:19.103617   48964 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:46:19.103637   48964 api_server.go:253] Checking apiserver healthz at https://192.168.61.182:8443/healthz ...
	I0229 18:46:19.108569   48964 api_server.go:279] https://192.168.61.182:8443/healthz returned 200:
	ok
	I0229 18:46:19.110109   48964 api_server.go:141] control plane version: v1.28.4
	I0229 18:46:19.110130   48964 api_server.go:131] duration metric: took 6.506256ms to wait for apiserver health ...
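	The healthz probe above hits the API server endpoint directly. A quick manual check against the same URL (sketch only; -k skips TLS verification, which is acceptable for this probe because /healthz is typically readable by unauthenticated clients on a default kubeadm-style cluster):
	  curl -sk https://192.168.61.182:8443/healthz
	  # expected to print: ok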
	I0229 18:46:19.110137   48964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:46:19.289354   48964 system_pods.go:59] 8 kube-system pods found
	I0229 18:46:19.289387   48964 system_pods.go:61] "coredns-5dd5756b68-gg4dg" [771e75dc-ab1c-4aa2-b7f8-0dddbb96c4f7] Running
	I0229 18:46:19.289394   48964 system_pods.go:61] "etcd-kindnet-387000" [516c4429-9d02-400b-b756-3624b720894c] Running
	I0229 18:46:19.289399   48964 system_pods.go:61] "kindnet-wrpbq" [4446d867-ccaf-4a2c-94be-2d35c7860e46] Running
	I0229 18:46:19.289404   48964 system_pods.go:61] "kube-apiserver-kindnet-387000" [690e816b-5233-41aa-8431-be8231a0f1c6] Running
	I0229 18:46:19.289409   48964 system_pods.go:61] "kube-controller-manager-kindnet-387000" [3d387f3a-ffd8-4449-b48f-09ce81188bff] Running
	I0229 18:46:19.289414   48964 system_pods.go:61] "kube-proxy-498rl" [daf5340f-642b-4b5f-9b7f-19f24c2ee539] Running
	I0229 18:46:19.289416   48964 system_pods.go:61] "kube-scheduler-kindnet-387000" [2cb7463a-2774-4f84-a7bd-86b479972876] Running
	I0229 18:46:19.289419   48964 system_pods.go:61] "storage-provisioner" [5b3491e6-1794-4e9d-a03f-3ec33d0ce9f0] Running
	I0229 18:46:19.289426   48964 system_pods.go:74] duration metric: took 179.283128ms to wait for pod list to return data ...
	I0229 18:46:19.289435   48964 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:46:19.486073   48964 default_sa.go:45] found service account: "default"
	I0229 18:46:19.486097   48964 default_sa.go:55] duration metric: took 196.648236ms for default service account to be created ...
	I0229 18:46:19.486106   48964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:46:19.692602   48964 system_pods.go:86] 8 kube-system pods found
	I0229 18:46:19.692638   48964 system_pods.go:89] "coredns-5dd5756b68-gg4dg" [771e75dc-ab1c-4aa2-b7f8-0dddbb96c4f7] Running
	I0229 18:46:19.692645   48964 system_pods.go:89] "etcd-kindnet-387000" [516c4429-9d02-400b-b756-3624b720894c] Running
	I0229 18:46:19.692650   48964 system_pods.go:89] "kindnet-wrpbq" [4446d867-ccaf-4a2c-94be-2d35c7860e46] Running
	I0229 18:46:19.692654   48964 system_pods.go:89] "kube-apiserver-kindnet-387000" [690e816b-5233-41aa-8431-be8231a0f1c6] Running
	I0229 18:46:19.692659   48964 system_pods.go:89] "kube-controller-manager-kindnet-387000" [3d387f3a-ffd8-4449-b48f-09ce81188bff] Running
	I0229 18:46:19.692669   48964 system_pods.go:89] "kube-proxy-498rl" [daf5340f-642b-4b5f-9b7f-19f24c2ee539] Running
	I0229 18:46:19.692677   48964 system_pods.go:89] "kube-scheduler-kindnet-387000" [2cb7463a-2774-4f84-a7bd-86b479972876] Running
	I0229 18:46:19.692683   48964 system_pods.go:89] "storage-provisioner" [5b3491e6-1794-4e9d-a03f-3ec33d0ce9f0] Running
	I0229 18:46:19.692694   48964 system_pods.go:126] duration metric: took 206.582069ms to wait for k8s-apps to be running ...
	I0229 18:46:19.692712   48964 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:46:19.692762   48964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:46:19.711223   48964 system_svc.go:56] duration metric: took 18.501832ms WaitForService to wait for kubelet.
	I0229 18:46:19.711255   48964 kubeadm.go:581] duration metric: took 9.059020985s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:46:19.711278   48964 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:46:19.886028   48964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:46:19.886073   48964 node_conditions.go:123] node cpu capacity is 2
	I0229 18:46:19.886098   48964 node_conditions.go:105] duration metric: took 174.80251ms to run NodePressure ...
	I0229 18:46:19.886113   48964 start.go:228] waiting for startup goroutines ...
	I0229 18:46:19.886131   48964 start.go:233] waiting for cluster config update ...
	I0229 18:46:19.886142   48964 start.go:242] writing updated cluster config ...
	I0229 18:46:19.886416   48964 ssh_runner.go:195] Run: rm -f paused
	I0229 18:46:19.933739   48964 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:46:19.935603   48964 out.go:177] * Done! kubectl is now configured to use "kindnet-387000" cluster and "default" namespace by default
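	At this point the kubeconfig context for the profile is selected, so the cluster can be inspected directly from the host, for example:
	  kubectl --context kindnet-387000 get nodes -o wide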
	I0229 18:46:15.593214   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:15.593692   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:15.593761   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:15.593665   49735 retry.go:31] will retry after 1.788637221s: waiting for machine to come up
	I0229 18:46:17.384355   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:17.385015   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:17.385037   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:17.384928   49735 retry.go:31] will retry after 2.237757616s: waiting for machine to come up
	I0229 18:46:19.623776   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:19.624266   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:19.624298   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:19.624244   49735 retry.go:31] will retry after 4.142959715s: waiting for machine to come up
	I0229 18:46:19.358495   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:21.857481   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:23.771521   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:23.771951   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find current IP address of domain calico-387000 in network mk-calico-387000
	I0229 18:46:23.771975   49712 main.go:141] libmachine: (calico-387000) DBG | I0229 18:46:23.771913   49735 retry.go:31] will retry after 3.934296571s: waiting for machine to come up
	I0229 18:46:24.359893   48239 pod_ready.go:102] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:25.858328   48239 pod_ready.go:92] pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:25.858352   48239 pod_ready.go:81] duration metric: took 40.007872945s waiting for pod "coredns-5dd5756b68-4hd4t" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.858363   48239 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-jsltr" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.863734   48239 pod_ready.go:92] pod "coredns-5dd5756b68-jsltr" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:25.863753   48239 pod_ready.go:81] duration metric: took 5.38216ms waiting for pod "coredns-5dd5756b68-jsltr" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.863761   48239 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.868321   48239 pod_ready.go:92] pod "etcd-auto-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:25.868336   48239 pod_ready.go:81] duration metric: took 4.570649ms waiting for pod "etcd-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.868344   48239 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.873099   48239 pod_ready.go:92] pod "kube-apiserver-auto-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:25.873118   48239 pod_ready.go:81] duration metric: took 4.768503ms waiting for pod "kube-apiserver-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.873129   48239 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.878383   48239 pod_ready.go:92] pod "kube-controller-manager-auto-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:25.878398   48239 pod_ready.go:81] duration metric: took 5.264335ms waiting for pod "kube-controller-manager-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:25.878406   48239 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-bj7zq" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:26.256112   48239 pod_ready.go:92] pod "kube-proxy-bj7zq" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:26.256146   48239 pod_ready.go:81] duration metric: took 377.733033ms waiting for pod "kube-proxy-bj7zq" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:26.256159   48239 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:26.655281   48239 pod_ready.go:92] pod "kube-scheduler-auto-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:46:26.655305   48239 pod_ready.go:81] duration metric: took 399.137228ms waiting for pod "kube-scheduler-auto-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:46:26.655319   48239 pod_ready.go:38] duration metric: took 40.836135359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:46:26.655335   48239 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:46:26.655388   48239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:46:26.672864   48239 api_server.go:72] duration metric: took 41.223510028s to wait for apiserver process to appear ...
	I0229 18:46:26.672890   48239 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:46:26.672910   48239 api_server.go:253] Checking apiserver healthz at https://192.168.72.173:8443/healthz ...
	I0229 18:46:26.677779   48239 api_server.go:279] https://192.168.72.173:8443/healthz returned 200:
	ok
	I0229 18:46:26.679105   48239 api_server.go:141] control plane version: v1.28.4
	I0229 18:46:26.679129   48239 api_server.go:131] duration metric: took 6.232194ms to wait for apiserver health ...
	I0229 18:46:26.679139   48239 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:46:26.859829   48239 system_pods.go:59] 8 kube-system pods found
	I0229 18:46:26.859857   48239 system_pods.go:61] "coredns-5dd5756b68-4hd4t" [1df26c69-c689-4ce2-b43a-1bf1638c9153] Running
	I0229 18:46:26.859861   48239 system_pods.go:61] "coredns-5dd5756b68-jsltr" [3a90112c-b9d1-447e-9d50-2a04793856fd] Running
	I0229 18:46:26.859864   48239 system_pods.go:61] "etcd-auto-387000" [f574ba14-30f2-4981-b618-e464d4d0941b] Running
	I0229 18:46:26.859867   48239 system_pods.go:61] "kube-apiserver-auto-387000" [25c5b35d-b5b5-46bf-a7af-2bd4c9e6736a] Running
	I0229 18:46:26.859871   48239 system_pods.go:61] "kube-controller-manager-auto-387000" [30f53362-7a31-43cb-a610-05f0233b53ad] Running
	I0229 18:46:26.859873   48239 system_pods.go:61] "kube-proxy-bj7zq" [64817574-2b45-485c-8801-18bf0336a0ed] Running
	I0229 18:46:26.859876   48239 system_pods.go:61] "kube-scheduler-auto-387000" [716b5bb7-efae-4059-82ae-56be70795d77] Running
	I0229 18:46:26.859878   48239 system_pods.go:61] "storage-provisioner" [35dc20a1-5214-4042-8fd8-f53e9591f6a5] Running
	I0229 18:46:26.859883   48239 system_pods.go:74] duration metric: took 180.738519ms to wait for pod list to return data ...
	I0229 18:46:26.859890   48239 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:46:27.055315   48239 default_sa.go:45] found service account: "default"
	I0229 18:46:27.055341   48239 default_sa.go:55] duration metric: took 195.44646ms for default service account to be created ...
	I0229 18:46:27.055352   48239 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:46:27.259574   48239 system_pods.go:86] 8 kube-system pods found
	I0229 18:46:27.259600   48239 system_pods.go:89] "coredns-5dd5756b68-4hd4t" [1df26c69-c689-4ce2-b43a-1bf1638c9153] Running
	I0229 18:46:27.259607   48239 system_pods.go:89] "coredns-5dd5756b68-jsltr" [3a90112c-b9d1-447e-9d50-2a04793856fd] Running
	I0229 18:46:27.259611   48239 system_pods.go:89] "etcd-auto-387000" [f574ba14-30f2-4981-b618-e464d4d0941b] Running
	I0229 18:46:27.259615   48239 system_pods.go:89] "kube-apiserver-auto-387000" [25c5b35d-b5b5-46bf-a7af-2bd4c9e6736a] Running
	I0229 18:46:27.259621   48239 system_pods.go:89] "kube-controller-manager-auto-387000" [30f53362-7a31-43cb-a610-05f0233b53ad] Running
	I0229 18:46:27.259627   48239 system_pods.go:89] "kube-proxy-bj7zq" [64817574-2b45-485c-8801-18bf0336a0ed] Running
	I0229 18:46:27.259633   48239 system_pods.go:89] "kube-scheduler-auto-387000" [716b5bb7-efae-4059-82ae-56be70795d77] Running
	I0229 18:46:27.259638   48239 system_pods.go:89] "storage-provisioner" [35dc20a1-5214-4042-8fd8-f53e9591f6a5] Running
	I0229 18:46:27.259647   48239 system_pods.go:126] duration metric: took 204.290052ms to wait for k8s-apps to be running ...
	I0229 18:46:27.259662   48239 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:46:27.259710   48239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:46:27.276595   48239 system_svc.go:56] duration metric: took 16.913142ms WaitForService to wait for kubelet.
	I0229 18:46:27.276621   48239 kubeadm.go:581] duration metric: took 41.827271023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:46:27.276642   48239 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:46:27.456802   48239 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:46:27.456829   48239 node_conditions.go:123] node cpu capacity is 2
	I0229 18:46:27.456840   48239 node_conditions.go:105] duration metric: took 180.192907ms to run NodePressure ...
	I0229 18:46:27.456850   48239 start.go:228] waiting for startup goroutines ...
	I0229 18:46:27.456857   48239 start.go:233] waiting for cluster config update ...
	I0229 18:46:27.456877   48239 start.go:242] writing updated cluster config ...
	I0229 18:46:27.457190   48239 ssh_runner.go:195] Run: rm -f paused
	I0229 18:46:27.526069   48239 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:46:27.528436   48239 out.go:177] * Done! kubectl is now configured to use "auto-387000" cluster and "default" namespace by default
	I0229 18:46:27.707877   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:27.708337   49712 main.go:141] libmachine: (calico-387000) Found IP for machine: 192.168.50.188
	I0229 18:46:27.708563   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has current primary IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:27.708934   49712 main.go:141] libmachine: (calico-387000) Reserving static IP address...
	I0229 18:46:27.709859   49712 main.go:141] libmachine: (calico-387000) DBG | unable to find host DHCP lease matching {name: "calico-387000", mac: "52:54:00:b6:e8:ad", ip: "192.168.50.188"} in network mk-calico-387000
	I0229 18:46:27.788216   49712 main.go:141] libmachine: (calico-387000) DBG | Getting to WaitForSSH function...
	I0229 18:46:27.788256   49712 main.go:141] libmachine: (calico-387000) Reserved static IP address: 192.168.50.188
	I0229 18:46:27.788270   49712 main.go:141] libmachine: (calico-387000) Waiting for SSH to be available...
	I0229 18:46:27.790536   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:27.790879   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:27.790909   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:27.791170   49712 main.go:141] libmachine: (calico-387000) DBG | Using SSH client type: external
	I0229 18:46:27.791202   49712 main.go:141] libmachine: (calico-387000) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/id_rsa (-rw-------)
	I0229 18:46:27.791237   49712 main.go:141] libmachine: (calico-387000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:46:27.791252   49712 main.go:141] libmachine: (calico-387000) DBG | About to run SSH command:
	I0229 18:46:27.791265   49712 main.go:141] libmachine: (calico-387000) DBG | exit 0
	I0229 18:46:27.924807   49712 main.go:141] libmachine: (calico-387000) DBG | SSH cmd err, output: <nil>: 
	I0229 18:46:27.925041   49712 main.go:141] libmachine: (calico-387000) KVM machine creation complete!
	I0229 18:46:27.925408   49712 main.go:141] libmachine: (calico-387000) Calling .GetConfigRaw
	I0229 18:46:27.925950   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:27.926155   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:27.926315   49712 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:46:27.926332   49712 main.go:141] libmachine: (calico-387000) Calling .GetState
	I0229 18:46:27.927859   49712 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:46:27.927875   49712 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:46:27.927882   49712 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:46:27.927891   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:27.932612   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:27.932988   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:27.933015   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:27.933200   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:27.933399   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:27.933558   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:27.933681   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:27.933845   49712 main.go:141] libmachine: Using SSH client type: native
	I0229 18:46:27.934111   49712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I0229 18:46:27.934129   49712 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:46:28.046836   49712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:46:28.046872   49712 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:46:28.046884   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:28.049571   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.049892   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.049922   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.050041   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:28.050213   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.050358   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.050483   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:28.050664   49712 main.go:141] libmachine: Using SSH client type: native
	I0229 18:46:28.050861   49712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I0229 18:46:28.050875   49712 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:46:28.156078   49712 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:46:28.156148   49712 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:46:28.156162   49712 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:46:28.156175   49712 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I0229 18:46:28.156413   49712 buildroot.go:166] provisioning hostname "calico-387000"
	I0229 18:46:28.156440   49712 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I0229 18:46:28.156611   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:28.160263   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.160725   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.160762   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.160841   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:28.161108   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.161283   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.161521   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:28.161740   49712 main.go:141] libmachine: Using SSH client type: native
	I0229 18:46:28.162011   49712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I0229 18:46:28.162038   49712 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-387000 && echo "calico-387000" | sudo tee /etc/hostname
	I0229 18:46:28.290395   49712 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-387000
	
	I0229 18:46:28.290437   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:28.293531   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.293883   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.293921   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.294124   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:28.294280   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.294402   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.294506   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:28.294747   49712 main.go:141] libmachine: Using SSH client type: native
	I0229 18:46:28.294909   49712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I0229 18:46:28.294925   49712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:46:28.422197   49712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:46:28.422228   49712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:46:28.422280   49712 buildroot.go:174] setting up certificates
	I0229 18:46:28.422298   49712 provision.go:83] configureAuth start
	I0229 18:46:28.422314   49712 main.go:141] libmachine: (calico-387000) Calling .GetMachineName
	I0229 18:46:28.422602   49712 main.go:141] libmachine: (calico-387000) Calling .GetIP
	I0229 18:46:28.425802   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.426138   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.426159   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.426425   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:28.429063   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.429508   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.429580   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.429794   49712 provision.go:138] copyHostCerts
	I0229 18:46:28.429850   49712 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:46:28.429871   49712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:46:28.429962   49712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:46:28.430090   49712 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:46:28.430107   49712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:46:28.430147   49712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:46:28.430228   49712 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:46:28.430238   49712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:46:28.430270   49712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:46:28.430381   49712 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.calico-387000 san=[192.168.50.188 192.168.50.188 localhost 127.0.0.1 minikube calico-387000]
	I0229 18:46:28.594893   49712 provision.go:172] copyRemoteCerts
	I0229 18:46:28.594943   49712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:46:28.594966   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:28.597654   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.598047   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.598084   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.598214   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:28.598445   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.598628   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:28.598768   49712 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/id_rsa Username:docker}
	I0229 18:46:28.690528   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 18:46:28.717384   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:46:28.747473   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:46:28.777627   49712 provision.go:86] duration metric: configureAuth took 355.31235ms
	I0229 18:46:28.777658   49712 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:46:28.777877   49712 config.go:182] Loaded profile config "calico-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:46:28.777906   49712 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:46:28.777920   49712 main.go:141] libmachine: (calico-387000) Calling .GetURL
	I0229 18:46:28.779244   49712 main.go:141] libmachine: (calico-387000) DBG | Using libvirt version 6000000
	I0229 18:46:28.781781   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.782218   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.782264   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.782475   49712 main.go:141] libmachine: Docker is up and running!
	I0229 18:46:28.782504   49712 main.go:141] libmachine: Reticulating splines...
	I0229 18:46:28.782535   49712 client.go:171] LocalClient.Create took 23.686717085s
	I0229 18:46:28.782580   49712 start.go:167] duration metric: libmachine.API.Create for "calico-387000" took 23.686815376s
	I0229 18:46:28.782594   49712 start.go:300] post-start starting for "calico-387000" (driver="kvm2")
	I0229 18:46:28.782609   49712 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:46:28.782635   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:28.782936   49712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:46:28.782966   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:28.785638   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.785985   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.786017   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.786158   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:28.786347   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.786512   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:28.786683   49712 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/id_rsa Username:docker}
	I0229 18:46:28.874318   49712 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:46:28.881092   49712 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:46:28.881123   49712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:46:28.881209   49712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:46:28.881301   49712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:46:28.881455   49712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:46:28.894133   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:46:28.929284   49712 start.go:303] post-start completed in 146.66588ms
	I0229 18:46:28.929357   49712 main.go:141] libmachine: (calico-387000) Calling .GetConfigRaw
	I0229 18:46:28.930182   49712 main.go:141] libmachine: (calico-387000) Calling .GetIP
	I0229 18:46:28.933134   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.933542   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.933575   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.933835   49712 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/config.json ...
	I0229 18:46:28.934014   49712 start.go:128] duration metric: createHost completed in 23.856090195s
	I0229 18:46:28.934040   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:28.936268   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.936655   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:28.936676   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:28.936816   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:28.936988   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.937161   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:28.937306   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:28.937582   49712 main.go:141] libmachine: Using SSH client type: native
	I0229 18:46:28.937818   49712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I0229 18:46:28.937835   49712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:46:29.045282   49712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232389.011954859
	
	I0229 18:46:29.045310   49712 fix.go:206] guest clock: 1709232389.011954859
	I0229 18:46:29.045320   49712 fix.go:219] Guest: 2024-02-29 18:46:29.011954859 +0000 UTC Remote: 2024-02-29 18:46:28.934025947 +0000 UTC m=+23.974278882 (delta=77.928912ms)
	I0229 18:46:29.045366   49712 fix.go:190] guest clock delta is within tolerance: 77.928912ms
	I0229 18:46:29.045373   49712 start.go:83] releasing machines lock for "calico-387000", held for 23.967560669s
	I0229 18:46:29.045400   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:29.045655   49712 main.go:141] libmachine: (calico-387000) Calling .GetIP
	I0229 18:46:29.048684   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:29.049105   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:29.049130   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:29.049268   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:29.049799   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:29.049967   49712 main.go:141] libmachine: (calico-387000) Calling .DriverName
	I0229 18:46:29.050052   49712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:46:29.050097   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:29.050196   49712 ssh_runner.go:195] Run: cat /version.json
	I0229 18:46:29.050248   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHHostname
	I0229 18:46:29.052904   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:29.052956   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:29.053293   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:29.053335   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:29.053398   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:29.053414   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:29.053698   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:29.053722   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHPort
	I0229 18:46:29.053851   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:29.053896   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHKeyPath
	I0229 18:46:29.053977   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:29.054055   49712 main.go:141] libmachine: (calico-387000) Calling .GetSSHUsername
	I0229 18:46:29.054124   49712 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/id_rsa Username:docker}
	I0229 18:46:29.054234   49712 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/calico-387000/id_rsa Username:docker}
	I0229 18:46:29.134054   49712 ssh_runner.go:195] Run: systemctl --version
	I0229 18:46:29.164146   49712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:46:29.172284   49712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:46:29.172355   49712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:46:29.197751   49712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:46:29.197777   49712 start.go:475] detecting cgroup driver to use...
	I0229 18:46:29.197849   49712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:46:29.248113   49712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:46:29.268237   49712 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:46:29.268304   49712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:46:29.288826   49712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:46:29.308370   49712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:46:29.460653   49712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:46:29.616744   49712 docker.go:233] disabling docker service ...
	I0229 18:46:29.616816   49712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:46:29.635664   49712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:46:29.650745   49712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:46:29.788435   49712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:46:29.949390   49712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:46:29.971064   49712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:46:29.996899   49712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:46:30.010217   49712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:46:30.027569   49712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:46:30.027640   49712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:46:30.040039   49712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:46:30.051773   49712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:46:30.063368   49712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:46:30.075096   49712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:46:30.090886   49712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:46:30.102675   49712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:46:30.116340   49712 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:46:30.116397   49712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:46:30.132181   49712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
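	(Editor's note, not part of the test log: a minimal sketch of the bridge-netfilter prerequisite the run remediates above. The net.bridge.* sysctls only exist once the br_netfilter module is loaded, which is why the earlier sysctl probe failed and minikube falls back to modprobe; commands mirror the ones in the log.)
	    sudo modprobe br_netfilter
	    lsmod | grep br_netfilter                  # confirm the module is loaded
	    sysctl net.bridge.bridge-nf-call-iptables  # now resolvable instead of "cannot stat"
	    cat /proc/sys/net/ipv4/ip_forward          # the log also enables IPv4 forwarding here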
	I0229 18:46:30.143231   49712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:46:30.273125   49712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:46:30.308534   49712 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:46:30.308609   49712 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:46:30.316242   49712 retry.go:31] will retry after 776.824419ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:46:31.094105   49712 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:46:31.102010   49712 start.go:543] Will wait 60s for crictl version
	I0229 18:46:31.102072   49712 ssh_runner.go:195] Run: which crictl
	I0229 18:46:31.107932   49712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:46:31.160329   49712 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:46:31.160400   49712 ssh_runner.go:195] Run: containerd --version
	I0229 18:46:31.196223   49712 ssh_runner.go:195] Run: containerd --version
	I0229 18:46:31.233390   49712 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 18:46:31.234749   49712 main.go:141] libmachine: (calico-387000) Calling .GetIP
	I0229 18:46:31.237549   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:31.237888   49712 main.go:141] libmachine: (calico-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:e8:ad", ip: ""} in network mk-calico-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:46:21 +0000 UTC Type:0 Mac:52:54:00:b6:e8:ad Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:calico-387000 Clientid:01:52:54:00:b6:e8:ad}
	I0229 18:46:31.237915   49712 main.go:141] libmachine: (calico-387000) DBG | domain calico-387000 has defined IP address 192.168.50.188 and MAC address 52:54:00:b6:e8:ad in network mk-calico-387000
	I0229 18:46:31.238073   49712 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:46:31.243012   49712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:46:31.261306   49712 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:46:31.261385   49712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:46:31.306274   49712 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:46:31.306334   49712 ssh_runner.go:195] Run: which lz4
	I0229 18:46:31.311125   49712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:46:31.315942   49712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:46:31.315975   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0229 18:46:33.289012   49712 containerd.go:548] Took 1.977922 seconds to copy over tarball
	I0229 18:46:33.289077   49712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:46:36.165308   49712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.876199284s)
	I0229 18:46:36.165334   49712 containerd.go:555] Took 2.876301 seconds to extract the tarball
	I0229 18:46:36.165358   49712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:46:36.211562   49712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:46:36.330952   49712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:46:36.358979   49712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:46:36.401485   49712 retry.go:31] will retry after 230.311185ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:46:36Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:46:36.632986   49712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:46:36.675846   49712 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 18:46:36.675870   49712 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:46:36.675931   49712 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:46:36.717162   49712 cni.go:84] Creating CNI manager for "calico"
	I0229 18:46:36.717204   49712 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:46:36.717228   49712 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.188 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-387000 NodeName:calico-387000 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:46:36.717400   49712 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-387000"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:46:36.717487   49712 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-387000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:calico-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0229 18:46:36.717556   49712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:46:36.732644   49712 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:46:36.732727   49712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:46:36.745217   49712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0229 18:46:36.765254   49712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:46:36.785377   49712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0229 18:46:36.804628   49712 ssh_runner.go:195] Run: grep 192.168.50.188	control-plane.minikube.internal$ /etc/hosts
	I0229 18:46:36.809074   49712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:46:36.823992   49712 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000 for IP: 192.168.50.188
	I0229 18:46:36.824024   49712 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:36.824164   49712 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:46:36.824200   49712 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:46:36.824264   49712 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.key
	I0229 18:46:36.824285   49712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt with IP's: []
	I0229 18:46:36.931548   49712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt ...
	I0229 18:46:36.931578   49712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: {Name:mkfd566aa556a7d872819e744b785cb4f55db721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:36.931731   49712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.key ...
	I0229 18:46:36.931743   49712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.key: {Name:mkb73af7452225bb2682b03eb42b65725d002bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:36.931812   49712 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.key.4b25dced
	I0229 18:46:36.931826   49712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.crt.4b25dced with IP's: [192.168.50.188 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:46:37.192153   49712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.crt.4b25dced ...
	I0229 18:46:37.192186   49712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.crt.4b25dced: {Name:mk5674e9432f4e9cd9dbacc9d677c043b4fe34e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:37.192356   49712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.key.4b25dced ...
	I0229 18:46:37.192376   49712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.key.4b25dced: {Name:mk58d1f0f1ccf0fb11f73b9eb45509a5cab9e424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:37.192492   49712 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.crt.4b25dced -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.crt
	I0229 18:46:37.192589   49712 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.key.4b25dced -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.key
	I0229 18:46:37.192641   49712 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.key
	I0229 18:46:37.192655   49712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.crt with IP's: []
	I0229 18:46:37.331351   49712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.crt ...
	I0229 18:46:37.331375   49712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.crt: {Name:mk6ec9858cf9aafa6cb1408c04ae9860a6ba9ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:37.331508   49712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.key ...
	I0229 18:46:37.331521   49712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.key: {Name:mkbaccdaeeb4d173505a728ada06da0b11deaa18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:46:37.331678   49712 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:46:37.331710   49712 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:46:37.331721   49712 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:46:37.331778   49712 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:46:37.331804   49712 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:46:37.331827   49712 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:46:37.331871   49712 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:46:37.332440   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:46:37.364051   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:46:37.433330   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:46:37.467348   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:46:37.498648   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:46:37.525477   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:46:37.552142   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:46:37.579992   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:46:37.606978   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:46:37.636108   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:46:37.662851   49712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:46:37.690645   49712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:46:37.709320   49712 ssh_runner.go:195] Run: openssl version
	I0229 18:46:37.715529   49712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:46:37.728388   49712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:46:37.733541   49712 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:46:37.733592   49712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:46:37.740269   49712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:46:37.753885   49712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:46:37.768377   49712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:46:37.773890   49712 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:46:37.773934   49712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:46:37.780431   49712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:46:37.794357   49712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:46:37.807908   49712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:46:37.813190   49712 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:46:37.813234   49712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:46:37.819936   49712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
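	(Editor's note, not part of the test log: the ln -fs targets above — b5213941.0, 51391683.0, 3ec20f2e.0 — follow the OpenSSL subject-hash naming convention. An illustrative sketch of how such a link is derived and checked, using one of the certificates from the log:)
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    ls -l "/etc/ssl/certs/${hash}.0"   # should resolve to minikubeCA.pem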
	I0229 18:46:37.834528   49712 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:46:37.839240   49712 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:46:37.839298   49712 kubeadm.go:404] StartCluster: {Name:calico-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.188 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:46:37.839387   49712 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:46:37.839437   49712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:46:37.882429   49712 cri.go:89] found id: ""
	I0229 18:46:37.882509   49712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:46:37.894760   49712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:46:37.907038   49712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:46:37.919226   49712 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:46:37.919267   49712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 18:46:37.975254   49712 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 18:46:37.975323   49712 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:46:38.147579   49712 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:46:38.147742   49712 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:46:38.147879   49712 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:46:38.406046   49712 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:46:38.567213   49712 out.go:204]   - Generating certificates and keys ...
	I0229 18:46:38.567362   49712 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:46:38.567475   49712 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:46:38.732003   49712 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:46:38.813802   49712 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:46:39.310748   49712 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:46:39.445140   49712 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:46:39.592309   49712 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:46:39.592495   49712 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-387000 localhost] and IPs [192.168.50.188 127.0.0.1 ::1]
	I0229 18:46:39.696271   49712 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:46:39.696476   49712 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-387000 localhost] and IPs [192.168.50.188 127.0.0.1 ::1]
	I0229 18:46:39.889609   49712 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:46:40.052195   49712 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:46:40.381298   49712 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:46:40.386056   49712 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:46:40.758207   49712 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:46:40.923929   49712 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:46:41.164427   49712 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:46:41.318210   49712 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:46:41.319055   49712 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:46:41.323316   49712 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:46:41.851807   45244 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:46:41.851974   45244 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:46:41.853689   45244 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:46:41.853746   45244 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:46:41.853843   45244 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:46:41.853991   45244 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:46:41.854132   45244 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:46:41.854295   45244 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:46:41.854409   45244 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:46:41.854495   45244 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:46:41.854606   45244 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:46:41.856466   45244 out.go:204]   - Generating certificates and keys ...
	I0229 18:46:41.856560   45244 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:46:41.856653   45244 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:46:41.856765   45244 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:46:41.856861   45244 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:46:41.856967   45244 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:46:41.857052   45244 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:46:41.857135   45244 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:46:41.857209   45244 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:46:41.857290   45244 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:46:41.857381   45244 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:46:41.857441   45244 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:46:41.857523   45244 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:46:41.857606   45244 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:46:41.857699   45244 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:46:41.857777   45244 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:46:41.857827   45244 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:46:41.857886   45244 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:46:41.859219   45244 out.go:204]   - Booting up control plane ...
	I0229 18:46:41.859312   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:46:41.859400   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:46:41.859458   45244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:46:41.859547   45244 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:46:41.859727   45244 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:46:41.859795   45244 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:46:41.859869   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.860155   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.860236   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.860476   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.860583   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.860796   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.860896   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.861109   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.861212   45244 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:41.861426   45244 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:41.861438   45244 kubeadm.go:322] 
	I0229 18:46:41.861474   45244 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:46:41.861508   45244 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:46:41.861515   45244 kubeadm.go:322] 
	I0229 18:46:41.861542   45244 kubeadm.go:322] This error is likely caused by:
	I0229 18:46:41.861574   45244 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:46:41.861691   45244 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:46:41.861703   45244 kubeadm.go:322] 
	I0229 18:46:41.861847   45244 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:46:41.861898   45244 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:46:41.861947   45244 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:46:41.861957   45244 kubeadm.go:322] 
	I0229 18:46:41.862088   45244 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:46:41.862219   45244 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:46:41.862337   45244 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:46:41.862416   45244 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:46:41.862530   45244 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:46:41.862653   45244 kubeadm.go:406] StartCluster complete in 8m6.209733519s
	I0229 18:46:41.862678   45244 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:46:41.862717   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0229 18:46:41.862784   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 18:46:41.922997   45244 cri.go:89] found id: ""
	I0229 18:46:41.923026   45244 logs.go:276] 0 containers: []
	W0229 18:46:41.923038   45244 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:46:41.923046   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0229 18:46:41.923115   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 18:46:41.974402   45244 cri.go:89] found id: ""
	I0229 18:46:41.974433   45244 logs.go:276] 0 containers: []
	W0229 18:46:41.974445   45244 logs.go:278] No container was found matching "etcd"
	I0229 18:46:41.974452   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0229 18:46:41.974529   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 18:46:42.045238   45244 cri.go:89] found id: ""
	I0229 18:46:42.045265   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.045276   45244 logs.go:278] No container was found matching "coredns"
	I0229 18:46:42.045283   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0229 18:46:42.045350   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 18:46:42.088338   45244 cri.go:89] found id: ""
	I0229 18:46:42.088365   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.088376   45244 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:46:42.088384   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0229 18:46:42.088450   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 18:46:42.129386   45244 cri.go:89] found id: ""
	I0229 18:46:42.129416   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.129428   45244 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:46:42.129435   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 18:46:42.129502   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 18:46:42.171873   45244 cri.go:89] found id: ""
	I0229 18:46:42.171894   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.171902   45244 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:46:42.171908   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0229 18:46:42.171958   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 18:46:42.211632   45244 cri.go:89] found id: ""
	I0229 18:46:42.211656   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.211664   45244 logs.go:278] No container was found matching "kindnet"
	I0229 18:46:42.211669   45244 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 18:46:42.211729   45244 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 18:46:42.261816   45244 cri.go:89] found id: ""
	I0229 18:46:42.261837   45244 logs.go:276] 0 containers: []
	W0229 18:46:42.261844   45244 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:46:42.261852   45244 logs.go:123] Gathering logs for kubelet ...
	I0229 18:46:42.261863   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:46:42.313140   45244 logs.go:123] Gathering logs for dmesg ...
	I0229 18:46:42.313173   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:46:42.327911   45244 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:46:42.327944   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:46:42.411111   45244 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:46:42.411164   45244 logs.go:123] Gathering logs for containerd ...
	I0229 18:46:42.411177   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0229 18:46:42.456959   45244 logs.go:123] Gathering logs for container status ...
	I0229 18:46:42.457002   45244 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 18:46:42.508698   45244 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:46:42.508753   45244 out.go:239] * 
	W0229 18:46:42.508820   45244 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:46:42.508841   45244 out.go:239] * 
	W0229 18:46:42.509757   45244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:46:42.512698   45244 out.go:177] 
	W0229 18:46:42.514014   45244 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:46:42.514077   45244 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:46:42.514104   45244 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:46:42.515555   45244 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> containerd <==
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.986741472Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987004867Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987056148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987317854Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987441432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987501059Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987549532Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987598311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987871455Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseR
untimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMiss
ingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987985628Z" level=info msg="Connect containerd service"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988045901Z" level=info msg="using legacy CRI server"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988078198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988266930Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989037697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989295153Z" level=info msg="Start subscribing containerd event"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989377058Z" level=info msg="Start recovering state"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.990279282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.990471179Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034388260Z" level=info msg="Start event monitor"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034498239Z" level=info msg="Start snapshots syncer"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034517505Z" level=info msg="Start cni network conf syncer for default"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034527207Z" level=info msg="Start streaming server"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034557306Z" level=info msg="containerd successfully booted in 0.090065s"
	Feb 29 18:42:48 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:42:48.052015588Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 18:42:48 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:42:48.052339514Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/.keep\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052023] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044537] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.656777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.325515] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730699] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.588499] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +0.058606] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067411] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
	[  +0.168224] systemd-fstab-generator[509]: Ignoring "noauto" option for root device
	[  +0.171213] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.318931] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +5.900853] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.061557] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.980225] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.271847] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[Feb29 18:42] systemd-fstab-generator[7934]: Ignoring "noauto" option for root device
	[  +0.070905] kauditd_printk_skb: 15 callbacks suppressed
	[Feb29 18:44] systemd-fstab-generator[9618]: Ignoring "noauto" option for root device
	[  +0.076239] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:46:44 up 8 min,  0 users,  load average: 0.89, 0.29, 0.11
	Linux old-k8s-version-561577 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 18:46:42 old-k8s-version-561577 kubelet[11216]: F0229 18:46:42.017707   11216 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:46:42 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:46:42 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:46:42 old-k8s-version-561577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Feb 29 18:46:42 old-k8s-version-561577 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:46:42 old-k8s-version-561577 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:46:42 old-k8s-version-561577 kubelet[11285]: I0229 18:46:42.966674   11285 server.go:410] Version: v1.16.0
	Feb 29 18:46:42 old-k8s-version-561577 kubelet[11285]: I0229 18:46:42.967061   11285 plugins.go:100] No cloud provider specified.
	Feb 29 18:46:42 old-k8s-version-561577 kubelet[11285]: I0229 18:46:42.967185   11285 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:46:42 old-k8s-version-561577 kubelet[11285]: I0229 18:46:42.975306   11285 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:46:42 old-k8s-version-561577 kubelet[11285]: W0229 18:46:42.977365   11285 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:46:42 old-k8s-version-561577 kubelet[11285]: F0229 18:46:42.977471   11285 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:46:42 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:46:42 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:46:43 old-k8s-version-561577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Feb 29 18:46:43 old-k8s-version-561577 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:46:43 old-k8s-version-561577 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:46:43 old-k8s-version-561577 kubelet[11330]: I0229 18:46:43.713802   11330 server.go:410] Version: v1.16.0
	Feb 29 18:46:43 old-k8s-version-561577 kubelet[11330]: I0229 18:46:43.714296   11330 plugins.go:100] No cloud provider specified.
	Feb 29 18:46:43 old-k8s-version-561577 kubelet[11330]: I0229 18:46:43.714358   11330 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:46:43 old-k8s-version-561577 kubelet[11330]: I0229 18:46:43.716985   11330 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:46:43 old-k8s-version-561577 kubelet[11330]: W0229 18:46:43.718820   11330 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:46:43 old-k8s-version-561577 kubelet[11330]: F0229 18:46:43.718927   11330 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:46:43 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:46:43 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (263.706537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-561577" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (521.63s)
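
In this run the kubelet dies in a restart loop with "failed to run Kubelet: mountpoint for cpu not found", and minikube's own output suggests forcing the systemd cgroup driver. A minimal retry sketch, assuming the same binary path, profile name, driver and runtime as this job (flag values are inferred from the log above and not re-verified here):

	out/minikube-linux-amd64 start -p old-k8s-version-561577 --kubernetes-version=v1.16.0 \
	  --driver=kvm2 --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If kubelet still loops, inspect it on the node, per kubeadm's advice in the log:
	out/minikube-linux-amd64 ssh -p old-k8s-version-561577 -- 'systemctl status kubelet'
	out/minikube-linux-amd64 ssh -p old-k8s-version-561577 -- 'sudo journalctl -xeu kubelet | tail -n 50'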

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:47:01.564799   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:47:42.525782   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 17 times in total)
E0229 18:48:36.587111   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:48:39.147506   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 5 times in total)
E0229 18:48:44.267946   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 11 times in total)
E0229 18:48:54.508917   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 10 times in total)
E0229 18:49:04.445975   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 10 times in total)
E0229 18:49:14.989100   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 27 times in total)
E0229 18:49:42.039272   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 14 times in total)
E0229 18:49:55.949662   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning repeated 82 times in total)
E0229 18:51:17.870668   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:19.951186   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:51:19.956518   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:51:19.966757   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:51:19.986959   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:51:20.027188   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:51:20.107509   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:51:20.267904   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:20.588992   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:51:20.603185   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:51:21.229968   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:22.510340   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:25.070609   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:28.014338   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
E0229 18:51:28.019611   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
E0229 18:51:28.029810   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
E0229 18:51:28.050011   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
E0229 18:51:28.090304   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
E0229 18:51:28.170634   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
E0229 18:51:28.331031   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:28.652071   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
E0229 18:51:29.293004   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:30.191207   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:30.573660   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:33.134342   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:33.750652   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:38.254578   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:40.431876   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:48.287051   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:51:48.495115   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:00.912161   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:08.975483   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:41.872544   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:46.915534   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:46.920788   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:46.931004   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:46.951243   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:46.991593   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:47.071876   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:47.232299   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:47.552451   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:48.193331   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:49.473926   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
E0229 18:52:49.936566   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:52.034959   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:52:57.156027   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:07.396495   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:21.897576   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E0229 18:53:21.902855   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E0229 18:53:21.913080   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E0229 18:53:21.933316   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E0229 18:53:21.973609   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E0229 18:53:22.053980   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E0229 18:53:22.214343   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:22.534494   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
E0229 18:53:23.175402   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:24.456500   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:27.017545   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:27.877087   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:32.138200   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:53:34.028697   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 7 more times)
E0229 18:53:42.378475   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 19 more times)
E0229 18:54:01.711583   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:02.858908   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:03.793706   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 4 more times)
E0229 18:54:08.837483   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:09.938331   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E0229 18:54:09.943577   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E0229 18:54:09.953929   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E0229 18:54:09.974149   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E0229 18:54:10.014472   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E0229 18:54:10.094802   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E0229 18:54:10.255194   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:10.576302   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
E0229 18:54:11.216482   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:11.857192   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:12.496941   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:15.057906   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 4 more times)
E0229 18:54:20.178117   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 10 more times)
E0229 18:54:30.419051   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 10 more times)
E0229 18:54:42.038997   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:54:43.819303   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 6 more times)
E0229 18:54:50.899722   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 9 more times)
E0229 18:55:00.899813   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:00.905100   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:00.915359   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:00.935602   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:00.975863   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:01.056175   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:01.216597   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:01.536857   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:02.177987   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:03.458778   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:06.019942   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 4 more times)
E0229 18:55:11.140226   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 10 more times)
E0229 18:55:21.381105   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 8 more times)
E0229 18:55:30.758530   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:31.860679   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(last message repeated 4 more times)
E0229 18:55:36.958531   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E0229 18:55:36.963832   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E0229 18:55:36.974068   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E0229 18:55:36.994335   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E0229 18:55:37.034610   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E0229 18:55:37.114913   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E0229 18:55:37.275300   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:37.596439   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
E0229 18:55:38.237367   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:39.517958   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:41.862110   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
E0229 18:55:42.078737   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (236.524615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-561577" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
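Note on the failure above: the repeated WARNING lines are helpers_test.go polling the kubernetes-dashboard pods by label selector against https://192.168.39.66:8443 until the 9m0s deadline expired. For manual reproduction, a minimal client-go sketch of an equivalent query follows; the kubeconfig path is illustrative only and not taken from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is an assumption for illustration).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same query the test helper polls: dashboard pods selected by k8s-app label.
	pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		fmt.Println("list failed:", err) // "connection refused" while the apiserver is down
		return
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}

While the apiserver is stopped, the List call returns the same "connection refused" error seen in the warnings above.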
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (221.105688ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-561577 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-387000 sudo iptables                       | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo docker                         | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo find                           | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo crio                           | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-387000                                     | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:48:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:48:50.773132   56616 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:48:50.773365   56616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:48:50.773374   56616 out.go:304] Setting ErrFile to fd 2...
	I0229 18:48:50.773378   56616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:48:50.773574   56616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:48:50.774145   56616 out.go:298] Setting JSON to false
	I0229 18:48:50.775813   56616 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5472,"bootTime":1709227059,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:48:50.776131   56616 start.go:139] virtualization: kvm guest
	I0229 18:48:50.778009   56616 out.go:177] * [bridge-387000] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:48:50.779099   56616 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:48:50.780171   56616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:48:50.779131   56616 notify.go:220] Checking for updates...
	I0229 18:48:50.782320   56616 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:48:50.783513   56616 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:48:50.784694   56616 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:48:50.785822   56616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:48:50.787580   56616 config.go:182] Loaded profile config "enable-default-cni-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:48:50.787729   56616 config.go:182] Loaded profile config "flannel-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:48:50.787857   56616 config.go:182] Loaded profile config "old-k8s-version-561577": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:48:50.787939   56616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:48:50.822724   56616 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:48:50.824108   56616 start.go:299] selected driver: kvm2
	I0229 18:48:50.824118   56616 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:48:50.824128   56616 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:48:50.824768   56616 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:48:50.824842   56616 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:48:50.839423   56616 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:48:50.839458   56616 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:48:50.839652   56616 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:48:50.839707   56616 cni.go:84] Creating CNI manager for "bridge"
	I0229 18:48:50.839719   56616 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:48:50.839730   56616 start_flags.go:323] config:
	{Name:bridge-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:48:50.839839   56616 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:48:50.842288   56616 out.go:177] * Starting control plane node bridge-387000 in cluster bridge-387000
	I0229 18:48:48.639911   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:51.137879   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:51.047516   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:51.048138   54985 main.go:141] libmachine: (flannel-387000) Found IP for machine: 192.168.50.138
	I0229 18:48:51.048163   54985 main.go:141] libmachine: (flannel-387000) Reserving static IP address...
	I0229 18:48:51.048184   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has current primary IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:51.048529   54985 main.go:141] libmachine: (flannel-387000) DBG | unable to find host DHCP lease matching {name: "flannel-387000", mac: "52:54:00:39:87:55", ip: "192.168.50.138"} in network mk-flannel-387000
	I0229 18:48:51.120924   54985 main.go:141] libmachine: (flannel-387000) DBG | Getting to WaitForSSH function...
	I0229 18:48:51.120963   54985 main.go:141] libmachine: (flannel-387000) Reserved static IP address: 192.168.50.138
	I0229 18:48:51.120976   54985 main.go:141] libmachine: (flannel-387000) Waiting for SSH to be available...
	I0229 18:48:51.123675   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:51.123962   54985 main.go:141] libmachine: (flannel-387000) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000
	I0229 18:48:51.123987   54985 main.go:141] libmachine: (flannel-387000) DBG | unable to find defined IP address of network mk-flannel-387000 interface with MAC address 52:54:00:39:87:55
	I0229 18:48:51.124162   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH client type: external
	I0229 18:48:51.124187   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa (-rw-------)
	I0229 18:48:51.124218   54985 main.go:141] libmachine: (flannel-387000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:48:51.124230   54985 main.go:141] libmachine: (flannel-387000) DBG | About to run SSH command:
	I0229 18:48:51.124247   54985 main.go:141] libmachine: (flannel-387000) DBG | exit 0
	I0229 18:48:51.127797   54985 main.go:141] libmachine: (flannel-387000) DBG | SSH cmd err, output: exit status 255: 
	I0229 18:48:51.127823   54985 main.go:141] libmachine: (flannel-387000) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 18:48:51.127834   54985 main.go:141] libmachine: (flannel-387000) DBG | command : exit 0
	I0229 18:48:51.127845   54985 main.go:141] libmachine: (flannel-387000) DBG | err     : exit status 255
	I0229 18:48:51.127856   54985 main.go:141] libmachine: (flannel-387000) DBG | output  : 
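	The "exit status 255" block above is libmachine's WaitForSSH probe failing because the flannel-387000 guest has not yet obtained a DHCP lease; the later successful probe at 18:48:54 runs the same "exit 0" command once the IP is known. A rough Go sketch of such a retry loop (host, key path, and attempt count are assumptions for illustration, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH mimics the "exit 0" probe in the log: run ssh with a trivial
	// command until it succeeds or we give up.
	func waitForSSH(host, keyPath string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+host, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil // SSH is available
			}
			// exit status 255 here usually means the guest has no reachable IP yet
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh to %s not available after %d attempts", host, attempts)
	}

	func main() {
		if err := waitForSSH("192.168.50.138", "/path/to/id_rsa", 20); err != nil {
			fmt.Println(err)
		}
	}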
	I0229 18:48:50.843585   56616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:48:50.843612   56616 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 18:48:50.843618   56616 cache.go:56] Caching tarball of preloaded images
	I0229 18:48:50.843706   56616 preload.go:174] Found /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:48:50.843719   56616 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 18:48:50.843791   56616 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/config.json ...
	I0229 18:48:50.843806   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/config.json: {Name:mk17c54d02704fa964d1848bcdb1d8f1ad0d67be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:48:50.843941   56616 start.go:365] acquiring machines lock for bridge-387000: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:48:55.279726   56616 start.go:369] acquired machines lock for "bridge-387000" in 4.43574817s
	I0229 18:48:55.279785   56616 start.go:93] Provisioning new machine with config: &{Name:bridge-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:48:55.279947   56616 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:48:55.282286   56616 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 18:48:55.282483   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:48:55.282528   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:48:55.299090   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0229 18:48:55.299478   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:48:55.300044   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:48:55.300064   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:48:55.300367   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:48:55.300539   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:48:55.300689   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:48:55.300847   56616 start.go:159] libmachine.API.Create for "bridge-387000" (driver="kvm2")
	I0229 18:48:55.300887   56616 client.go:168] LocalClient.Create starting
	I0229 18:48:55.300919   56616 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem
	I0229 18:48:55.300957   56616 main.go:141] libmachine: Decoding PEM data...
	I0229 18:48:55.300978   56616 main.go:141] libmachine: Parsing certificate...
	I0229 18:48:55.301045   56616 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem
	I0229 18:48:55.301069   56616 main.go:141] libmachine: Decoding PEM data...
	I0229 18:48:55.301092   56616 main.go:141] libmachine: Parsing certificate...
	I0229 18:48:55.301117   56616 main.go:141] libmachine: Running pre-create checks...
	I0229 18:48:55.301135   56616 main.go:141] libmachine: (bridge-387000) Calling .PreCreateCheck
	I0229 18:48:55.301462   56616 main.go:141] libmachine: (bridge-387000) Calling .GetConfigRaw
	I0229 18:48:55.301887   56616 main.go:141] libmachine: Creating machine...
	I0229 18:48:55.301907   56616 main.go:141] libmachine: (bridge-387000) Calling .Create
	I0229 18:48:55.302064   56616 main.go:141] libmachine: (bridge-387000) Creating KVM machine...
	I0229 18:48:55.303167   56616 main.go:141] libmachine: (bridge-387000) DBG | found existing default KVM network
	I0229 18:48:55.304288   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.304131   56679 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ed:28:6a} reservation:<nil>}
	I0229 18:48:55.305179   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.305108   56679 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:76:62} reservation:<nil>}
	I0229 18:48:55.306011   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.305938   56679 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:cd:16} reservation:<nil>}
	I0229 18:48:55.307171   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.307074   56679 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a7980}
	I0229 18:48:55.312399   56616 main.go:141] libmachine: (bridge-387000) DBG | trying to create private KVM network mk-bridge-387000 192.168.72.0/24...
	I0229 18:48:55.388048   56616 main.go:141] libmachine: (bridge-387000) DBG | private KVM network mk-bridge-387000 192.168.72.0/24 created
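	The three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.72.0/24" show how a free /24 is picked for the new KVM network: walk a list of candidate private subnets and take the first one not already assigned to an existing libvirt network. A simplified Go sketch of that selection (a stand-in for minikube's real libvirt inspection, not its actual code):

	package main

	import (
		"fmt"
		"net"
	)

	// pickFreeSubnet returns the first candidate /24 that does not collide with
	// any subnet already in use. The candidates mirror the ranges seen in the
	// log (39, 50, 61, 72, ...); the collision check is deliberately simplified.
	func pickFreeSubnet(inUse []*net.IPNet) (*net.IPNet, error) {
		for _, third := range []int{39, 50, 61, 72, 83, 94} {
			_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if err != nil {
				return nil, err
			}
			taken := false
			for _, used := range inUse {
				if used.Contains(candidate.IP) || candidate.Contains(used.IP) {
					taken = true
					break
				}
			}
			if !taken {
				return candidate, nil
			}
		}
		return nil, fmt.Errorf("no free private subnet found")
	}

	func main() {
		var inUse []*net.IPNet
		for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
			_, n, _ := net.ParseCIDR(cidr)
			inUse = append(inUse, n)
		}
		free, err := pickFreeSubnet(inUse)
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet:", free) // 192.168.72.0/24, as in the log
	}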
	I0229 18:48:55.388081   56616 main.go:141] libmachine: (bridge-387000) Setting up store path in /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000 ...
	I0229 18:48:55.388090   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.388013   56679 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:48:55.388152   56616 main.go:141] libmachine: (bridge-387000) Building disk image from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:48:55.388185   56616 main.go:141] libmachine: (bridge-387000) Downloading /home/jenkins/minikube-integration/18259-6412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:48:55.672301   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.672088   56679 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa...
	I0229 18:48:53.138358   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:55.637973   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:54.128463   54985 main.go:141] libmachine: (flannel-387000) DBG | Getting to WaitForSSH function...
	I0229 18:48:54.130836   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.131203   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.131233   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.131376   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH client type: external
	I0229 18:48:54.131405   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa (-rw-------)
	I0229 18:48:54.131431   54985 main.go:141] libmachine: (flannel-387000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:48:54.131445   54985 main.go:141] libmachine: (flannel-387000) DBG | About to run SSH command:
	I0229 18:48:54.131459   54985 main.go:141] libmachine: (flannel-387000) DBG | exit 0
	I0229 18:48:54.254430   54985 main.go:141] libmachine: (flannel-387000) DBG | SSH cmd err, output: <nil>: 
	I0229 18:48:54.254741   54985 main.go:141] libmachine: (flannel-387000) KVM machine creation complete!
	I0229 18:48:54.255063   54985 main.go:141] libmachine: (flannel-387000) Calling .GetConfigRaw
	I0229 18:48:54.255534   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:54.255734   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:54.255907   54985 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:48:54.255920   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:48:54.257161   54985 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:48:54.257175   54985 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:48:54.257180   54985 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:48:54.257186   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.259535   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.259914   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.259945   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.260057   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.260218   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.260371   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.260533   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.260687   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.260872   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.260882   54985 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:48:54.362305   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:48:54.362333   54985 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:48:54.362344   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.364852   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.365248   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.365284   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.365411   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.365605   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.365765   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.365910   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.366047   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.366217   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.366228   54985 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:48:54.476176   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:48:54.476231   54985 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:48:54.476237   54985 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:48:54.476244   54985 main.go:141] libmachine: (flannel-387000) Calling .GetMachineName
	I0229 18:48:54.476456   54985 buildroot.go:166] provisioning hostname "flannel-387000"
	I0229 18:48:54.476474   54985 main.go:141] libmachine: (flannel-387000) Calling .GetMachineName
	I0229 18:48:54.476653   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.479228   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.479574   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.479601   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.479814   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.480005   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.480193   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.480339   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.480513   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.480683   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.480694   54985 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-387000 && echo "flannel-387000" | sudo tee /etc/hostname
	I0229 18:48:54.599481   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-387000
	
	I0229 18:48:54.599508   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.602400   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.602741   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.602769   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.603004   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.603162   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.603382   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.603513   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.603668   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.603855   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.603878   54985 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:48:54.717276   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:48:54.717305   54985 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:48:54.717329   54985 buildroot.go:174] setting up certificates
	I0229 18:48:54.717341   54985 provision.go:83] configureAuth start
	I0229 18:48:54.717368   54985 main.go:141] libmachine: (flannel-387000) Calling .GetMachineName
	I0229 18:48:54.717639   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:54.720649   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.721011   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.721036   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.721231   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.723814   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.724159   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.724189   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.724351   54985 provision.go:138] copyHostCerts
	I0229 18:48:54.724411   54985 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:48:54.724427   54985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:48:54.724488   54985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:48:54.724578   54985 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:48:54.724585   54985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:48:54.724608   54985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:48:54.724694   54985 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:48:54.724701   54985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:48:54.724724   54985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:48:54.724811   54985 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.flannel-387000 san=[192.168.50.138 192.168.50.138 localhost 127.0.0.1 minikube flannel-387000]
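	The "generating server cert" line above creates a machine server certificate signed by the minikube CA, carrying the listed SANs and the 26280h expiry from the profile config. A rough crypto/x509 sketch of that step (file names are assumed, the CA key is assumed to be a PKCS#1 RSA key, and nil/error checks are minimal):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Load the CA generated earlier (file names are illustrative).
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

		// Server certificate with the SANs listed in the log line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.flannel-387000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "flannel-387000"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.50.138"), net.ParseIP("127.0.0.1")},
		}
		serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}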
	I0229 18:48:54.858082   54985 provision.go:172] copyRemoteCerts
	I0229 18:48:54.858139   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:48:54.858170   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.860744   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.861068   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.861093   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.861264   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.861446   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.861601   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.861790   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:54.950399   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:48:54.978245   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0229 18:48:55.009882   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:48:55.035218   54985 provision.go:86] duration metric: configureAuth took 317.866623ms
	I0229 18:48:55.035242   54985 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:48:55.035401   54985 config.go:182] Loaded profile config "flannel-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:48:55.035426   54985 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:48:55.035442   54985 main.go:141] libmachine: (flannel-387000) Calling .GetURL
	I0229 18:48:55.036662   54985 main.go:141] libmachine: (flannel-387000) DBG | Using libvirt version 6000000
	I0229 18:48:55.038759   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.039104   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.039134   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.039285   54985 main.go:141] libmachine: Docker is up and running!
	I0229 18:48:55.039304   54985 main.go:141] libmachine: Reticulating splines...
	I0229 18:48:55.039312   54985 client.go:171] LocalClient.Create took 31.777126651s
	I0229 18:48:55.039337   54985 start.go:167] duration metric: libmachine.API.Create for "flannel-387000" took 31.77720499s
	I0229 18:48:55.039347   54985 start.go:300] post-start starting for "flannel-387000" (driver="kvm2")
	I0229 18:48:55.039355   54985 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:48:55.039370   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.039619   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:48:55.039641   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.041889   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.042187   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.042223   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.042360   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.042583   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.042721   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.042836   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:55.126438   54985 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:48:55.131071   54985 info.go:137] Remote host: Buildroot 2023.02.9
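The sshutil and ssh_runner lines above show minikube opening an SSH session into the guest (user docker, key under .minikube/machines/flannel-387000/id_rsa) and running plain shell commands such as cat /etc/os-release over that session. For reference, a minimal stand-alone sketch of that pattern using golang.org/x/crypto/ssh; this is not minikube's ssh_runner, and the address, key path and command are copied from the log purely as placeholders.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder values mirroring the log lines above.
	keyPath := "/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa"
	addr := "192.168.50.138:22"

	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}

	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run one command and print its combined stdout/stderr,
	// the same shape as the "Run: cat /etc/os-release" line above.
	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}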
	I0229 18:48:55.131095   54985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:48:55.131163   54985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:48:55.131253   54985 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:48:55.131369   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:48:55.143382   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:48:55.170742   54985 start.go:303] post-start completed in 131.386068ms
	I0229 18:48:55.170782   54985 main.go:141] libmachine: (flannel-387000) Calling .GetConfigRaw
	I0229 18:48:55.171346   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:55.174022   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.174346   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.174380   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.174636   54985 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/config.json ...
	I0229 18:48:55.174797   54985 start.go:128] duration metric: createHost completed in 31.931014733s
	I0229 18:48:55.174818   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.176833   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.177128   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.177153   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.177323   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.177509   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.177663   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.177824   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.177959   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:55.178180   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:55.178197   54985 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:48:55.279538   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232535.269815607
	
	I0229 18:48:55.279559   54985 fix.go:206] guest clock: 1709232535.269815607
	I0229 18:48:55.279568   54985 fix.go:219] Guest: 2024-02-29 18:48:55.269815607 +0000 UTC Remote: 2024-02-29 18:48:55.174807849 +0000 UTC m=+32.064580051 (delta=95.007758ms)
	I0229 18:48:55.279626   54985 fix.go:190] guest clock delta is within tolerance: 95.007758ms
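The fix.go lines above run date +%s.%N in the guest, parse the seconds.nanoseconds value, and accept the machine when the guest/host skew is within tolerance (95ms in this run). A rough stand-alone sketch of that comparison; the 2s tolerance below is an assumption for illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" (the output of `date +%s.%N`)
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1709232535.269815607") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("guest/host clock delta %v (tolerance %v, within=%v)\n",
		delta, tolerance, delta <= tolerance)
}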
	I0229 18:48:55.279634   54985 start.go:83] releasing machines lock for "flannel-387000", held for 32.035959699s
	I0229 18:48:55.279668   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.279936   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:55.282606   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.282973   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.282999   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.283205   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.283675   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.283842   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.283944   54985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:48:55.283988   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.284030   54985 ssh_runner.go:195] Run: cat /version.json
	I0229 18:48:55.284058   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.286624   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.286894   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.287012   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.287034   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.287208   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.287235   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.287241   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.287414   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.287416   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.287616   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.287618   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.287809   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.287814   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:55.287920   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:55.394397   54985 ssh_runner.go:195] Run: systemctl --version
	I0229 18:48:55.402146   54985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:48:55.411717   54985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:48:55.411800   54985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:48:55.442497   54985 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:48:55.442525   54985 start.go:475] detecting cgroup driver to use...
	I0229 18:48:55.442612   54985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:48:55.740595   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:48:55.757748   54985 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:48:55.757797   54985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:48:55.775921   54985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:48:55.793972   54985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:48:55.927103   54985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:48:56.065634   54985 docker.go:233] disabling docker service ...
	I0229 18:48:56.065711   54985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:48:56.082468   54985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:48:56.097030   54985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:48:56.267367   54985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:48:56.393663   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:48:56.409105   54985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:48:56.430079   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:48:56.442345   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:48:56.453625   54985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:48:56.453677   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:48:56.465097   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:48:56.476377   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:48:56.492078   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:48:56.505091   54985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:48:56.516674   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:48:56.527676   54985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:48:56.537464   54985 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:48:56.537515   54985 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:48:56.552483   54985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:48:56.562495   54985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:48:56.705808   54985 ssh_runner.go:195] Run: sudo systemctl restart containerd
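The sequence above stops cri-docker and docker, points crictl at the containerd socket, and switches containerd to the cgroupfs driver by forcing SystemdCgroup = false in /etc/containerd/config.toml before restarting the service. The snippet below only illustrates that single sed edit as a Go regexp rewrite; it is a sketch, not minikube code.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Mirror of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

	if err := os.WriteFile(path, updated, 0644); err != nil {
		log.Fatal(err)
	}
	// containerd must be restarted afterwards (systemctl restart containerd),
	// as the log does above, for the cgroup driver change to take effect.
}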
	I0229 18:48:56.737148   54985 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:48:56.737243   54985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:48:56.742880   54985 retry.go:31] will retry after 1.363497332s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:48:58.106643   54985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
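The retry.go lines here (and in the bridge-387000 bring-up below) poll a condition, sleep, and give up once an overall deadline passes, such as the 60s wait for /run/containerd/containerd.sock. A generic wait-until helper in that spirit; the function name and intervals are illustrative and do not reproduce minikube's retry package API.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitFor polls check() until it succeeds or the deadline passes,
// sleeping `interval` between attempts.
func waitFor(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Example: wait up to 60s for the containerd socket to appear.
	err := waitFor(60*time.Second, time.Second, func() error {
		_, statErr := os.Stat("/run/containerd/containerd.sock")
		return statErr
	})
	fmt.Println("result:", err)
}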
	I0229 18:48:58.112963   54985 start.go:543] Will wait 60s for crictl version
	I0229 18:48:58.113022   54985 ssh_runner.go:195] Run: which crictl
	I0229 18:48:58.117960   54985 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:48:58.158237   54985 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:48:58.158311   54985 ssh_runner.go:195] Run: containerd --version
	I0229 18:48:58.200231   54985 ssh_runner.go:195] Run: containerd --version
	I0229 18:48:58.230896   54985 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 18:48:55.905787   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.905640   56679 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/bridge-387000.rawdisk...
	I0229 18:48:55.905831   56616 main.go:141] libmachine: (bridge-387000) DBG | Writing magic tar header
	I0229 18:48:55.905845   56616 main.go:141] libmachine: (bridge-387000) DBG | Writing SSH key tar header
	I0229 18:48:55.905857   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.905790   56679 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000 ...
	I0229 18:48:55.905964   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000 (perms=drwx------)
	I0229 18:48:55.905988   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000
	I0229 18:48:55.905996   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:48:55.906027   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines
	I0229 18:48:55.906047   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:48:55.906069   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube (perms=drwxr-xr-x)
	I0229 18:48:55.906086   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412 (perms=drwxrwxr-x)
	I0229 18:48:55.906099   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:48:55.906129   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:48:55.906148   56616 main.go:141] libmachine: (bridge-387000) Creating domain...
	I0229 18:48:55.906161   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412
	I0229 18:48:55.906175   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:48:55.906183   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:48:55.906195   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home
	I0229 18:48:55.906221   56616 main.go:141] libmachine: (bridge-387000) DBG | Skipping /home - not owner
	I0229 18:48:55.907207   56616 main.go:141] libmachine: (bridge-387000) define libvirt domain using xml: 
	I0229 18:48:55.907227   56616 main.go:141] libmachine: (bridge-387000) <domain type='kvm'>
	I0229 18:48:55.907234   56616 main.go:141] libmachine: (bridge-387000)   <name>bridge-387000</name>
	I0229 18:48:55.907240   56616 main.go:141] libmachine: (bridge-387000)   <memory unit='MiB'>3072</memory>
	I0229 18:48:55.907245   56616 main.go:141] libmachine: (bridge-387000)   <vcpu>2</vcpu>
	I0229 18:48:55.907249   56616 main.go:141] libmachine: (bridge-387000)   <features>
	I0229 18:48:55.907255   56616 main.go:141] libmachine: (bridge-387000)     <acpi/>
	I0229 18:48:55.907261   56616 main.go:141] libmachine: (bridge-387000)     <apic/>
	I0229 18:48:55.907266   56616 main.go:141] libmachine: (bridge-387000)     <pae/>
	I0229 18:48:55.907273   56616 main.go:141] libmachine: (bridge-387000)     
	I0229 18:48:55.907281   56616 main.go:141] libmachine: (bridge-387000)   </features>
	I0229 18:48:55.907293   56616 main.go:141] libmachine: (bridge-387000)   <cpu mode='host-passthrough'>
	I0229 18:48:55.907304   56616 main.go:141] libmachine: (bridge-387000)   
	I0229 18:48:55.907314   56616 main.go:141] libmachine: (bridge-387000)   </cpu>
	I0229 18:48:55.907332   56616 main.go:141] libmachine: (bridge-387000)   <os>
	I0229 18:48:55.907364   56616 main.go:141] libmachine: (bridge-387000)     <type>hvm</type>
	I0229 18:48:55.907377   56616 main.go:141] libmachine: (bridge-387000)     <boot dev='cdrom'/>
	I0229 18:48:55.907386   56616 main.go:141] libmachine: (bridge-387000)     <boot dev='hd'/>
	I0229 18:48:55.907399   56616 main.go:141] libmachine: (bridge-387000)     <bootmenu enable='no'/>
	I0229 18:48:55.907412   56616 main.go:141] libmachine: (bridge-387000)   </os>
	I0229 18:48:55.907437   56616 main.go:141] libmachine: (bridge-387000)   <devices>
	I0229 18:48:55.907459   56616 main.go:141] libmachine: (bridge-387000)     <disk type='file' device='cdrom'>
	I0229 18:48:55.907481   56616 main.go:141] libmachine: (bridge-387000)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/boot2docker.iso'/>
	I0229 18:48:55.907495   56616 main.go:141] libmachine: (bridge-387000)       <target dev='hdc' bus='scsi'/>
	I0229 18:48:55.907508   56616 main.go:141] libmachine: (bridge-387000)       <readonly/>
	I0229 18:48:55.907518   56616 main.go:141] libmachine: (bridge-387000)     </disk>
	I0229 18:48:55.907531   56616 main.go:141] libmachine: (bridge-387000)     <disk type='file' device='disk'>
	I0229 18:48:55.907557   56616 main.go:141] libmachine: (bridge-387000)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:48:55.907574   56616 main.go:141] libmachine: (bridge-387000)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/bridge-387000.rawdisk'/>
	I0229 18:48:55.907586   56616 main.go:141] libmachine: (bridge-387000)       <target dev='hda' bus='virtio'/>
	I0229 18:48:55.907594   56616 main.go:141] libmachine: (bridge-387000)     </disk>
	I0229 18:48:55.907602   56616 main.go:141] libmachine: (bridge-387000)     <interface type='network'>
	I0229 18:48:55.907612   56616 main.go:141] libmachine: (bridge-387000)       <source network='mk-bridge-387000'/>
	I0229 18:48:55.907623   56616 main.go:141] libmachine: (bridge-387000)       <model type='virtio'/>
	I0229 18:48:55.907631   56616 main.go:141] libmachine: (bridge-387000)     </interface>
	I0229 18:48:55.907642   56616 main.go:141] libmachine: (bridge-387000)     <interface type='network'>
	I0229 18:48:55.907655   56616 main.go:141] libmachine: (bridge-387000)       <source network='default'/>
	I0229 18:48:55.907670   56616 main.go:141] libmachine: (bridge-387000)       <model type='virtio'/>
	I0229 18:48:55.907680   56616 main.go:141] libmachine: (bridge-387000)     </interface>
	I0229 18:48:55.907690   56616 main.go:141] libmachine: (bridge-387000)     <serial type='pty'>
	I0229 18:48:55.907699   56616 main.go:141] libmachine: (bridge-387000)       <target port='0'/>
	I0229 18:48:55.907709   56616 main.go:141] libmachine: (bridge-387000)     </serial>
	I0229 18:48:55.907717   56616 main.go:141] libmachine: (bridge-387000)     <console type='pty'>
	I0229 18:48:55.907728   56616 main.go:141] libmachine: (bridge-387000)       <target type='serial' port='0'/>
	I0229 18:48:55.907746   56616 main.go:141] libmachine: (bridge-387000)     </console>
	I0229 18:48:55.907765   56616 main.go:141] libmachine: (bridge-387000)     <rng model='virtio'>
	I0229 18:48:55.907778   56616 main.go:141] libmachine: (bridge-387000)       <backend model='random'>/dev/random</backend>
	I0229 18:48:55.907790   56616 main.go:141] libmachine: (bridge-387000)     </rng>
	I0229 18:48:55.907820   56616 main.go:141] libmachine: (bridge-387000)     
	I0229 18:48:55.907837   56616 main.go:141] libmachine: (bridge-387000)     
	I0229 18:48:55.907848   56616 main.go:141] libmachine: (bridge-387000)   </devices>
	I0229 18:48:55.907861   56616 main.go:141] libmachine: (bridge-387000) </domain>
	I0229 18:48:55.907874   56616 main.go:141] libmachine: (bridge-387000) 
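With the domain XML assembled, the driver defines it through libvirt, makes sure the referenced networks are active, and boots it ("Creating domain...", "Ensuring networks are active...", "Waiting to get IP..."). A bare-bones sketch of those calls with the github.com/libvirt/libvirt-go bindings; domainXML stands in for the document printed above and error handling is reduced to log.Fatal.

package main

import (
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the profile config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Make sure a network referenced by the XML is running,
	// as the "Ensuring network default is active" line does.
	net, err := conn.LookupNetworkByName("default")
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if active, _ := net.IsActive(); !active {
		if err := net.Create(); err != nil {
			log.Fatal(err)
		}
	}

	domainXML := "<domain type='kvm'>...</domain>" // the document printed above

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the defined VM
		log.Fatal(err)
	}
	log.Println("domain defined and started; now poll DHCP leases for its IP")
}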
	I0229 18:48:55.986324   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:0b:a3:b8 in network default
	I0229 18:48:55.987007   56616 main.go:141] libmachine: (bridge-387000) Ensuring networks are active...
	I0229 18:48:55.987043   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:55.987631   56616 main.go:141] libmachine: (bridge-387000) Ensuring network default is active
	I0229 18:48:55.988035   56616 main.go:141] libmachine: (bridge-387000) Ensuring network mk-bridge-387000 is active
	I0229 18:48:55.988602   56616 main.go:141] libmachine: (bridge-387000) Getting domain xml...
	I0229 18:48:55.989337   56616 main.go:141] libmachine: (bridge-387000) Creating domain...
	I0229 18:48:57.278411   56616 main.go:141] libmachine: (bridge-387000) Waiting to get IP...
	I0229 18:48:57.279142   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:57.279606   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:57.279635   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:57.279570   56679 retry.go:31] will retry after 272.020032ms: waiting for machine to come up
	I0229 18:48:57.552974   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:57.553494   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:57.553524   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:57.553449   56679 retry.go:31] will retry after 361.14125ms: waiting for machine to come up
	I0229 18:48:57.916017   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:57.916519   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:57.916547   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:57.916480   56679 retry.go:31] will retry after 433.645136ms: waiting for machine to come up
	I0229 18:48:58.352062   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:58.352615   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:58.352648   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:58.352560   56679 retry.go:31] will retry after 586.599788ms: waiting for machine to come up
	I0229 18:48:58.940663   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:58.941363   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:58.941401   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:58.941267   56679 retry.go:31] will retry after 694.893907ms: waiting for machine to come up
	I0229 18:48:59.638320   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:59.639177   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:59.639638   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:59.639156   56679 retry.go:31] will retry after 616.373171ms: waiting for machine to come up
	I0229 18:49:00.256713   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:00.257280   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:00.257337   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:00.257213   56679 retry.go:31] will retry after 946.181658ms: waiting for machine to come up
	I0229 18:48:57.640616   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:00.142378   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:58.232255   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:58.235077   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:58.235503   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:58.235534   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:58.235706   54985 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:48:58.240459   54985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:48:58.254807   54985 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:48:58.254874   54985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:48:58.292420   54985 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:48:58.292504   54985 ssh_runner.go:195] Run: which lz4
	I0229 18:48:58.297357   54985 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:48:58.302502   54985 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:48:58.302535   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0229 18:49:00.245621   54985 containerd.go:548] Took 1.948289 seconds to copy over tarball
	I0229 18:49:00.245695   54985 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:49:03.216854   54985 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.97113447s)
	I0229 18:49:03.311255   54985 containerd.go:555] Took 3.065601 seconds to extract the tarball
	I0229 18:49:03.311279   54985 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:49:03.355068   54985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:49:03.482825   54985 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:49:03.512774   54985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:03.555996   54985 retry.go:31] will retry after 374.597303ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:49:03Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:49:03.931703   54985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:03.974702   54985 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 18:49:03.974727   54985 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:49:03.974783   54985 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:49:04.015211   54985 cni.go:84] Creating CNI manager for "flannel"
	I0229 18:49:04.015239   54985 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:49:04.015256   54985 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-387000 NodeName:flannel-387000 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:49:04.015364   54985 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "flannel-387000"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:49:04.015429   54985 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=flannel-387000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:flannel-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:}
	I0229 18:49:04.015479   54985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:49:04.027273   54985 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:49:04.027349   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:49:04.038618   54985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 18:49:04.057180   54985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:49:04.075354   54985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0229 18:49:04.093446   54985 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0229 18:49:04.097862   54985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:49:04.112661   54985 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000 for IP: 192.168.50.138
	I0229 18:49:04.112689   54985 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.112846   54985 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:49:04.112898   54985 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:49:04.112955   54985 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.key
	I0229 18:49:04.112968   54985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt with IP's: []
	I0229 18:49:04.246708   54985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt ...
	I0229 18:49:04.246740   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: {Name:mkd2ec537db5870bae60b08d4f72854668507412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.246931   54985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.key ...
	I0229 18:49:04.246945   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.key: {Name:mk3766f09d804b8c79adb8c2906ce65c768652b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.247039   54985 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076
	I0229 18:49:04.247056   54985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076 with IP's: [192.168.50.138 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:49:04.301273   54985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076 ...
	I0229 18:49:04.301300   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076: {Name:mk7169c456aa9a4ecf986b00db44d47f2dc907ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.301465   54985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076 ...
	I0229 18:49:04.301481   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076: {Name:mk6393b99ec929fb754394229a6c7159a47bb763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.301569   54985 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt
	I0229 18:49:04.301660   54985 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key
	I0229 18:49:04.301754   54985 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key
	I0229 18:49:04.301769   54985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt with IP's: []
	I0229 18:49:04.572734   54985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt ...
	I0229 18:49:04.572764   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt: {Name:mk63899018d8766e5f7ceac2248de6529e432cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.572949   54985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key ...
	I0229 18:49:04.572964   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key: {Name:mk348b42238503d8f73773c2471539498f37e200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
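The crypto.go steps above sign the profile's client, apiserver and proxy-client certificates against the cached minikube CAs, with the apiserver certificate carrying the IP SANs [192.168.50.138 10.96.0.1 127.0.0.1 10.0.0.1]. A condensed crypto/x509 sketch of issuing a server certificate with those SANs from a CA; the in-process CA, key size, validity and output handling below are illustrative only, not the files minikube actually writes.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA generated in-process; minikube instead loads its cached
	// CA key pair from .minikube/ca.crt and .minikube/ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.50.138"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}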
	I0229 18:49:04.573159   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:49:04.573208   54985 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:49:04.573220   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:49:04.573257   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:49:04.573302   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:49:04.573337   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:49:04.573396   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:49:04.574006   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:49:04.606754   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:49:04.632828   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:49:04.659118   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:49:04.686022   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:49:04.712702   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:49:04.739880   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:49:04.767247   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:49:04.797719   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:49:04.824352   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:49:04.850637   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:49:04.877085   54985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:49:04.895419   54985 ssh_runner.go:195] Run: openssl version
	I0229 18:49:04.901474   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:49:04.913206   54985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:04.918167   54985 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:04.918233   54985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:04.924528   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:49:04.935997   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:49:04.951762   54985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:49:04.958212   54985 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:49:04.958289   54985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:49:04.965363   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:49:04.976766   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:49:04.988980   54985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:49:04.994098   54985 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:49:04.994144   54985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:49:05.000698   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:49:05.012981   54985 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:49:05.017989   54985 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:49:05.018048   54985 kubeadm.go:404] StartCluster: {Name:flannel-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:flannel-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:49:05.018149   54985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:49:05.018218   54985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:49:05.065799   54985 cri.go:89] found id: ""
	I0229 18:49:05.065931   54985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:49:05.076645   54985 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:49:05.087554   54985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:49:05.098425   54985 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:49:05.098473   54985 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 18:49:05.165291   54985 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 18:49:05.165417   54985 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:49:05.325025   54985 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:49:05.325164   54985 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:49:05.325291   54985 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:49:05.561296   54985 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:49:01.204867   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:01.205369   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:01.205398   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:01.205320   56679 retry.go:31] will retry after 1.269210028s: waiting for machine to come up
	I0229 18:49:02.475729   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:02.476324   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:02.476372   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:02.476283   56679 retry.go:31] will retry after 1.35365046s: waiting for machine to come up
	I0229 18:49:03.831686   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:03.832193   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:03.832234   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:03.832163   56679 retry.go:31] will retry after 1.727519863s: waiting for machine to come up
	I0229 18:49:05.561673   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:05.562340   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:05.562365   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:05.562260   56679 retry.go:31] will retry after 1.769800655s: waiting for machine to come up
	I0229 18:49:02.668516   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:05.139882   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:05.564077   54985 out.go:204]   - Generating certificates and keys ...
	I0229 18:49:05.564186   54985 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:49:05.564302   54985 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:49:06.166833   54985 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:49:06.270209   54985 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:49:06.471361   54985 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:49:06.592112   54985 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:49:06.683086   54985 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:49:06.683244   54985 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [flannel-387000 localhost] and IPs [192.168.50.138 127.0.0.1 ::1]
	I0229 18:49:07.030753   54985 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:49:07.034577   54985 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [flannel-387000 localhost] and IPs [192.168.50.138 127.0.0.1 ::1]
	I0229 18:49:07.124341   54985 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:49:07.273168   54985 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:49:07.374288   54985 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:49:07.374643   54985 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:49:07.546728   54985 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:49:07.794758   54985 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:49:07.980088   54985 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:49:08.090238   54985 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:49:08.091197   54985 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:49:08.096118   54985 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:49:08.097933   54985 out.go:204]   - Booting up control plane ...
	I0229 18:49:08.098088   54985 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:49:08.098178   54985 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:49:08.098258   54985 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:49:08.120974   54985 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:49:08.122094   54985 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:49:08.122162   54985 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:49:07.637496   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:08.139758   52876 pod_ready.go:92] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.139786   52876 pod_ready.go:81] duration metric: took 40.009402771s waiting for pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.139801   52876 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.146783   52876 pod_ready.go:92] pod "etcd-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.146809   52876 pod_ready.go:81] duration metric: took 7.000584ms waiting for pod "etcd-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.146821   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.154109   52876 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.154169   52876 pod_ready.go:81] duration metric: took 7.338039ms waiting for pod "kube-apiserver-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.154189   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.160099   52876 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.160117   52876 pod_ready.go:81] duration metric: took 5.91974ms waiting for pod "kube-controller-manager-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.160130   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-g9phw" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.166492   52876 pod_ready.go:92] pod "kube-proxy-g9phw" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.166510   52876 pod_ready.go:81] duration metric: took 6.371891ms waiting for pod "kube-proxy-g9phw" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.166521   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.535773   52876 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.535801   52876 pod_ready.go:81] duration metric: took 369.272066ms waiting for pod "kube-scheduler-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.535814   52876 pod_ready.go:38] duration metric: took 40.417024581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:49:08.535834   52876 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:49:08.535895   52876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:49:08.557861   52876 api_server.go:72] duration metric: took 41.867654795s to wait for apiserver process to appear ...
	I0229 18:49:08.557884   52876 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:49:08.557903   52876 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8443/healthz ...
	I0229 18:49:08.564318   52876 api_server.go:279] https://192.168.61.38:8443/healthz returned 200:
	ok
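The same healthz probe can be reproduced by hand after the run; a minimal sketch, assuming the kubeconfig context carries the profile name, as minikube normally configures it:

    # Query the apiserver health endpoint through kubectl's raw API access
    kubectl --context enable-default-cni-387000 get --raw /healthz
    # A healthy control plane answers with:
    # ok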
	I0229 18:49:08.565970   52876 api_server.go:141] control plane version: v1.28.4
	I0229 18:49:08.565995   52876 api_server.go:131] duration metric: took 8.1035ms to wait for apiserver health ...
	I0229 18:49:08.566005   52876 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:49:08.737834   52876 system_pods.go:59] 7 kube-system pods found
	I0229 18:49:08.737862   52876 system_pods.go:61] "coredns-5dd5756b68-h7tnh" [8378f6a0-03e8-45a0-822a-80b30208ddaa] Running
	I0229 18:49:08.737867   52876 system_pods.go:61] "etcd-enable-default-cni-387000" [37ef7d80-af93-422f-b188-1a817ae2d1e9] Running
	I0229 18:49:08.737874   52876 system_pods.go:61] "kube-apiserver-enable-default-cni-387000" [b5ee1f4d-681b-4788-ae39-d53a726f677c] Running
	I0229 18:49:08.737877   52876 system_pods.go:61] "kube-controller-manager-enable-default-cni-387000" [46c4c5d2-8656-42b9-8d43-a9006932902d] Running
	I0229 18:49:08.737880   52876 system_pods.go:61] "kube-proxy-g9phw" [f82d8097-989f-4e89-ad51-8ba63677e2f6] Running
	I0229 18:49:08.737884   52876 system_pods.go:61] "kube-scheduler-enable-default-cni-387000" [3ef0b6d3-9a60-4a3b-bccc-416fd65b2457] Running
	I0229 18:49:08.737886   52876 system_pods.go:61] "storage-provisioner" [0f5d9674-54f5-4e0b-9ba7-2dc1ee8477f9] Running
	I0229 18:49:08.737892   52876 system_pods.go:74] duration metric: took 171.881248ms to wait for pod list to return data ...
	I0229 18:49:08.737899   52876 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:49:08.935934   52876 default_sa.go:45] found service account: "default"
	I0229 18:49:08.935960   52876 default_sa.go:55] duration metric: took 198.054097ms for default service account to be created ...
	I0229 18:49:08.935969   52876 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:49:09.137364   52876 system_pods.go:86] 7 kube-system pods found
	I0229 18:49:09.137391   52876 system_pods.go:89] "coredns-5dd5756b68-h7tnh" [8378f6a0-03e8-45a0-822a-80b30208ddaa] Running
	I0229 18:49:09.137398   52876 system_pods.go:89] "etcd-enable-default-cni-387000" [37ef7d80-af93-422f-b188-1a817ae2d1e9] Running
	I0229 18:49:09.137403   52876 system_pods.go:89] "kube-apiserver-enable-default-cni-387000" [b5ee1f4d-681b-4788-ae39-d53a726f677c] Running
	I0229 18:49:09.137407   52876 system_pods.go:89] "kube-controller-manager-enable-default-cni-387000" [46c4c5d2-8656-42b9-8d43-a9006932902d] Running
	I0229 18:49:09.137410   52876 system_pods.go:89] "kube-proxy-g9phw" [f82d8097-989f-4e89-ad51-8ba63677e2f6] Running
	I0229 18:49:09.137415   52876 system_pods.go:89] "kube-scheduler-enable-default-cni-387000" [3ef0b6d3-9a60-4a3b-bccc-416fd65b2457] Running
	I0229 18:49:09.137418   52876 system_pods.go:89] "storage-provisioner" [0f5d9674-54f5-4e0b-9ba7-2dc1ee8477f9] Running
	I0229 18:49:09.137426   52876 system_pods.go:126] duration metric: took 201.45074ms to wait for k8s-apps to be running ...
	I0229 18:49:09.137434   52876 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:49:09.137485   52876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:49:09.153880   52876 system_svc.go:56] duration metric: took 16.434869ms WaitForService to wait for kubelet.
	I0229 18:49:09.153915   52876 kubeadm.go:581] duration metric: took 42.463713197s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:49:09.153938   52876 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:49:09.340471   52876 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:49:09.340502   52876 node_conditions.go:123] node cpu capacity is 2
	I0229 18:49:09.340515   52876 node_conditions.go:105] duration metric: took 186.571706ms to run NodePressure ...
	I0229 18:49:09.340528   52876 start.go:228] waiting for startup goroutines ...
	I0229 18:49:09.340536   52876 start.go:233] waiting for cluster config update ...
	I0229 18:49:09.340549   52876 start.go:242] writing updated cluster config ...
	I0229 18:49:09.340800   52876 ssh_runner.go:195] Run: rm -f paused
	I0229 18:49:09.400557   52876 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:49:09.403260   52876 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-387000" cluster and "default" namespace by default
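The readiness loop logged above (coredns, etcd, apiserver, controller-manager, proxy, scheduler, storage-provisioner) can be re-checked afterwards with plain kubectl; a sketch, assuming the same context name:

    # Pods whose Ready condition the test waited on
    kubectl --context enable-default-cni-387000 -n kube-system get pods
    # Node capacity and pressure data read by the NodePressure verification
    kubectl --context enable-default-cni-387000 get nodes -o wide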
	I0229 18:49:07.333748   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:07.334330   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:07.334355   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:07.334288   56679 retry.go:31] will retry after 3.500057333s: waiting for machine to come up
	I0229 18:49:08.290891   54985 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:49:10.835648   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:10.836226   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:10.836253   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:10.836169   56679 retry.go:31] will retry after 3.989790949s: waiting for machine to come up
	I0229 18:49:14.828360   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:14.828762   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:14.828794   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:14.828711   56679 retry.go:31] will retry after 4.551150284s: waiting for machine to come up
	I0229 18:49:14.792864   54985 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502792 seconds
	I0229 18:49:14.793025   54985 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 18:49:14.812677   54985 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 18:49:15.345116   54985 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 18:49:15.345370   54985 kubeadm.go:322] [mark-control-plane] Marking the node flannel-387000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 18:49:15.860380   54985 kubeadm.go:322] [bootstrap-token] Using token: fzw3xu.gjzf53iobyclbb8f
	I0229 18:49:15.862090   54985 out.go:204]   - Configuring RBAC rules ...
	I0229 18:49:15.862196   54985 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 18:49:15.883610   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 18:49:15.914462   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 18:49:15.930784   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 18:49:15.936444   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 18:49:15.940632   54985 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 18:49:15.961392   54985 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 18:49:16.210930   54985 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 18:49:16.293240   54985 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 18:49:16.295565   54985 kubeadm.go:322] 
	I0229 18:49:16.295660   54985 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 18:49:16.295696   54985 kubeadm.go:322] 
	I0229 18:49:16.295824   54985 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 18:49:16.295843   54985 kubeadm.go:322] 
	I0229 18:49:16.295878   54985 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 18:49:16.295961   54985 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 18:49:16.296041   54985 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 18:49:16.296050   54985 kubeadm.go:322] 
	I0229 18:49:16.296127   54985 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 18:49:16.296137   54985 kubeadm.go:322] 
	I0229 18:49:16.296225   54985 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 18:49:16.296233   54985 kubeadm.go:322] 
	I0229 18:49:16.296303   54985 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 18:49:16.296408   54985 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 18:49:16.296505   54985 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 18:49:16.296519   54985 kubeadm.go:322] 
	I0229 18:49:16.297058   54985 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 18:49:16.297148   54985 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 18:49:16.297162   54985 kubeadm.go:322] 
	I0229 18:49:16.298485   54985 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fzw3xu.gjzf53iobyclbb8f \
	I0229 18:49:16.298641   54985 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e \
	I0229 18:49:16.298683   54985 kubeadm.go:322] 	--control-plane 
	I0229 18:49:16.298693   54985 kubeadm.go:322] 
	I0229 18:49:16.298817   54985 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 18:49:16.298828   54985 kubeadm.go:322] 
	I0229 18:49:16.298955   54985 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fzw3xu.gjzf53iobyclbb8f \
	I0229 18:49:16.299103   54985 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e 
	I0229 18:49:16.300315   54985 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
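The only warning kubeadm raises here is the disabled kubelet unit. A hedged sketch of the follow-up it suggests, run inside the guest (the profile name is taken from this run):

    # Enable the kubelet unit so it starts on boot, as the kubeadm warning advises
    minikube ssh -p flannel-387000 -- sudo systemctl enable kubelet.service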
	I0229 18:49:16.300349   54985 cni.go:84] Creating CNI manager for "flannel"
	I0229 18:49:16.301954   54985 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0229 18:49:16.303080   54985 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 18:49:16.315159   54985 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 18:49:16.315174   54985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4407 bytes)
	I0229 18:49:16.344591   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 18:49:17.371587   54985 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.026955401s)
	I0229 18:49:17.371662   54985 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:49:17.371774   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=flannel-387000 minikube.k8s.io/updated_at=2024_02_29T18_49_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:17.371780   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:17.403191   54985 ops.go:34] apiserver oom_adj: -16
	I0229 18:49:17.571494   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:18.072399   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:19.383783   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.384373   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has current primary IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.384400   56616 main.go:141] libmachine: (bridge-387000) Found IP for machine: 192.168.72.206
	I0229 18:49:19.384413   56616 main.go:141] libmachine: (bridge-387000) Reserving static IP address...
	I0229 18:49:19.384745   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find host DHCP lease matching {name: "bridge-387000", mac: "52:54:00:e7:3d:17", ip: "192.168.72.206"} in network mk-bridge-387000
	I0229 18:49:19.467084   56616 main.go:141] libmachine: (bridge-387000) Reserved static IP address: 192.168.72.206
	I0229 18:49:19.467116   56616 main.go:141] libmachine: (bridge-387000) DBG | Getting to WaitForSSH function...
	I0229 18:49:19.467131   56616 main.go:141] libmachine: (bridge-387000) Waiting for SSH to be available...
	I0229 18:49:19.470103   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.470604   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.470634   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.470883   56616 main.go:141] libmachine: (bridge-387000) DBG | Using SSH client type: external
	I0229 18:49:19.470915   56616 main.go:141] libmachine: (bridge-387000) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa (-rw-------)
	I0229 18:49:19.470955   56616 main.go:141] libmachine: (bridge-387000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:49:19.470968   56616 main.go:141] libmachine: (bridge-387000) DBG | About to run SSH command:
	I0229 18:49:19.470980   56616 main.go:141] libmachine: (bridge-387000) DBG | exit 0
	I0229 18:49:19.607432   56616 main.go:141] libmachine: (bridge-387000) DBG | SSH cmd err, output: <nil>: 
	I0229 18:49:19.607699   56616 main.go:141] libmachine: (bridge-387000) KVM machine creation complete!
	I0229 18:49:19.608049   56616 main.go:141] libmachine: (bridge-387000) Calling .GetConfigRaw
	I0229 18:49:19.608585   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:19.608830   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:19.608991   56616 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:49:19.609007   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:19.610370   56616 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:49:19.610388   56616 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:49:19.610394   56616 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:49:19.610400   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.612950   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.613296   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.613330   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.613454   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.613634   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.613806   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.613963   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.614133   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.614382   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.614398   56616 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:49:19.734312   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:49:19.734336   56616 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:49:19.734347   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.737388   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.737754   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.737783   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.737904   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.738096   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.738281   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.738431   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.738649   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.738844   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.738856   56616 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:49:19.848276   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:49:19.848367   56616 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:49:19.848392   56616 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:49:19.848407   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:49:19.848649   56616 buildroot.go:166] provisioning hostname "bridge-387000"
	I0229 18:49:19.848673   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:49:19.848904   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.851556   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.851862   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.851890   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.852064   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.852270   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.852422   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.852549   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.852682   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.852868   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.852886   56616 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-387000 && echo "bridge-387000" | sudo tee /etc/hostname
	I0229 18:49:19.979574   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-387000
	
	I0229 18:49:19.979606   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.982451   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.982866   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.982892   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.983066   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.983280   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.983460   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.983660   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.983817   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.984024   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.984047   56616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:49:20.103897   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:49:20.103928   56616 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:49:20.103958   56616 buildroot.go:174] setting up certificates
	I0229 18:49:20.103970   56616 provision.go:83] configureAuth start
	I0229 18:49:20.103979   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:49:20.104245   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:20.107051   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.107486   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.107524   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.107722   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.109836   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.110236   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.110275   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.110385   56616 provision.go:138] copyHostCerts
	I0229 18:49:20.110458   56616 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:49:20.110479   56616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:49:20.110574   56616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:49:20.110744   56616 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:49:20.110757   56616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:49:20.110791   56616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:49:20.110880   56616 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:49:20.110891   56616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:49:20.110917   56616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:49:20.111011   56616 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.bridge-387000 san=[192.168.72.206 192.168.72.206 localhost 127.0.0.1 minikube bridge-387000]
	I0229 18:49:20.410804   56616 provision.go:172] copyRemoteCerts
	I0229 18:49:20.410861   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:49:20.410881   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.413655   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.414043   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.414071   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.414332   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.414499   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.414691   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.414834   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.497270   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:49:20.525867   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 18:49:20.552476   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:49:20.580240   56616 provision.go:86] duration metric: configureAuth took 476.257842ms
	I0229 18:49:20.580265   56616 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:49:20.580428   56616 config.go:182] Loaded profile config "bridge-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:49:20.580448   56616 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:49:20.580457   56616 main.go:141] libmachine: (bridge-387000) Calling .GetURL
	I0229 18:49:20.581631   56616 main.go:141] libmachine: (bridge-387000) DBG | Using libvirt version 6000000
	I0229 18:49:20.584136   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.584479   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.584506   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.584667   56616 main.go:141] libmachine: Docker is up and running!
	I0229 18:49:20.584681   56616 main.go:141] libmachine: Reticulating splines...
	I0229 18:49:20.584686   56616 client.go:171] LocalClient.Create took 25.283791742s
	I0229 18:49:20.584706   56616 start.go:167] duration metric: libmachine.API.Create for "bridge-387000" took 25.283858614s
	I0229 18:49:20.584723   56616 start.go:300] post-start starting for "bridge-387000" (driver="kvm2")
	I0229 18:49:20.584747   56616 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:49:20.584769   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.584984   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:49:20.585022   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.587316   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.587635   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.587662   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.587854   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.588015   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.588157   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.588290   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.670225   56616 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:49:20.675577   56616 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:49:20.675602   56616 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:49:20.675691   56616 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:49:20.675776   56616 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:49:20.675893   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:49:20.686838   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:49:20.716655   56616 start.go:303] post-start completed in 131.906511ms
	I0229 18:49:20.716705   56616 main.go:141] libmachine: (bridge-387000) Calling .GetConfigRaw
	I0229 18:49:20.717307   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:20.720080   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.720472   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.720500   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.720732   56616 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/config.json ...
	I0229 18:49:20.720904   56616 start.go:128] duration metric: createHost completed in 25.440945694s
	I0229 18:49:20.720926   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.723089   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.723459   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.723488   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.723652   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.723817   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.723971   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.724130   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.724299   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:20.724453   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:20.724464   56616 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:49:20.835532   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232560.812399754
	
	I0229 18:49:20.835558   56616 fix.go:206] guest clock: 1709232560.812399754
	I0229 18:49:20.835568   56616 fix.go:219] Guest: 2024-02-29 18:49:20.812399754 +0000 UTC Remote: 2024-02-29 18:49:20.720917042 +0000 UTC m=+29.991679352 (delta=91.482712ms)
	I0229 18:49:20.835595   56616 fix.go:190] guest clock delta is within tolerance: 91.482712ms
	I0229 18:49:20.835607   56616 start.go:83] releasing machines lock for "bridge-387000", held for 25.555852929s
	I0229 18:49:20.835636   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.835933   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:20.838370   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.838785   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.838813   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.838942   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.839411   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.839578   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.839675   56616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:49:20.839726   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.839837   56616 ssh_runner.go:195] Run: cat /version.json
	I0229 18:49:20.839861   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.842320   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.842594   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.842695   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.842726   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.842906   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.843005   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.843025   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.843079   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.843212   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.843281   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.843352   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.843414   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.843535   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.843651   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.942136   56616 ssh_runner.go:195] Run: systemctl --version
	I0229 18:49:20.949065   56616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:49:20.955541   56616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:49:20.955595   56616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:49:20.973448   56616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:49:20.973470   56616 start.go:475] detecting cgroup driver to use...
	I0229 18:49:20.973533   56616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:49:21.005357   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:49:21.022925   56616 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:49:21.022981   56616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:49:21.039112   56616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:49:21.057247   56616 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:49:21.212886   56616 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:49:21.370010   56616 docker.go:233] disabling docker service ...
	I0229 18:49:21.370083   56616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:49:21.386034   56616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:49:21.400407   56616 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:49:21.536035   56616 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:49:21.676667   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:49:21.692656   56616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:49:21.713885   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:49:21.725087   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:49:21.736570   56616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:49:21.736626   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:49:21.748441   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:49:21.760072   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:49:21.771541   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:49:21.783060   56616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:49:21.795082   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:49:21.806530   56616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:49:21.817316   56616 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:49:21.817363   56616 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:49:21.831798   56616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:49:21.841878   56616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:49:21.966256   56616 ssh_runner.go:195] Run: sudo systemctl restart containerd
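Taken together, the runtime preparation above boils down to a few edits to /etc/containerd/config.toml plus a netfilter/forwarding check before the restart; a condensed sketch of the same steps, with paths and values as they appear in this log:

    # Point CRI tooling at containerd's socket
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    # Use the pause:3.9 sandbox image and the cgroupfs cgroup driver
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    # Make sure bridged traffic hits iptables and IP forwarding is on
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Reload units and restart the runtime
    sudo systemctl daemon-reload && sudo systemctl restart containerd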
	I0229 18:49:21.998492   56616 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:49:21.998590   56616 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:49:22.004519   56616 retry.go:31] will retry after 549.732234ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:49:22.555304   56616 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:49:22.561632   56616 start.go:543] Will wait 60s for crictl version
	I0229 18:49:22.561691   56616 ssh_runner.go:195] Run: which crictl
	I0229 18:49:22.566407   56616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:49:22.608717   56616 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:49:22.608795   56616 ssh_runner.go:195] Run: containerd --version
	I0229 18:49:22.640336   56616 ssh_runner.go:195] Run: containerd --version
	I0229 18:49:22.680031   56616 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
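At this point the guest's runtime can be inspected directly; a minimal sketch, assuming the profile name used in this run:

    # Confirm the CRI runtime name and API version from inside the VM
    minikube ssh -p bridge-387000 -- sudo crictl version
    # containerd's own version string, as queried by the log above
    minikube ssh -p bridge-387000 -- containerd --version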
	I0229 18:49:18.571782   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:19.072566   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:19.572515   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:20.071582   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:20.571812   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:21.072193   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:21.572064   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:22.071516   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:22.571542   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:23.072295   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:22.681459   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:22.684177   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:22.684547   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:22.684578   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:22.684769   56616 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 18:49:22.690361   56616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:49:22.707604   56616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:49:22.707655   56616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:22.745534   56616 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:49:22.745594   56616 ssh_runner.go:195] Run: which lz4
	I0229 18:49:22.750429   56616 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:49:22.755295   56616 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:49:22.755334   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0229 18:49:24.704924   56616 containerd.go:548] Took 1.954529 seconds to copy over tarball
	I0229 18:49:24.705016   56616 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:49:23.572426   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:24.071621   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:24.572409   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:25.071577   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:25.571832   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:26.071861   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:26.572338   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:27.072199   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:27.571695   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:28.072553   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:28.101160   56616 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.396112309s)
	I0229 18:49:28.101213   56616 containerd.go:555] Took 3.396233 seconds to extract the tarball
	I0229 18:49:28.101226   56616 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:49:28.154976   56616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:49:28.289077   56616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:49:28.320670   56616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:28.355635   56616 retry.go:31] will retry after 138.96002ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:49:28Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:49:28.495035   56616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:28.553988   56616 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 18:49:28.554009   56616 cache_images.go:84] Images are preloaded, skipping loading
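The preload sequence above is a simple check-then-extract flow: if the expected control-plane image is missing from containerd, ship the cached tarball, unpack it into /var, and restart containerd. A rough bash equivalent using the same commands and the paths from this run ("node" is a placeholder for the minikube VM) is:

    # sketch of the preload path shown in the log; "node" stands in for the VM
    if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.28.4'; then
      scp preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 node:/preloaded.tar.lz4
      ssh node 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'
      ssh node 'sudo rm /preloaded.tar.lz4 && sudo systemctl daemon-reload && sudo systemctl restart containerd'
    fi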
	I0229 18:49:28.554060   56616 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:49:28.600664   56616 cni.go:84] Creating CNI manager for "bridge"
	I0229 18:49:28.600693   56616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:49:28.600714   56616 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.206 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-387000 NodeName:bridge-387000 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:49:28.600848   56616 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-387000"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:49:28.600942   56616 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=bridge-387000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I0229 18:49:28.601002   56616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:49:28.616140   56616 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:49:28.616209   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:49:28.633664   56616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0229 18:49:28.658539   56616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:49:28.684967   56616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0229 18:49:28.710713   56616 ssh_runner.go:195] Run: grep 192.168.72.206	control-plane.minikube.internal$ /etc/hosts
	I0229 18:49:28.716105   56616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:49:28.733367   56616 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000 for IP: 192.168.72.206
	I0229 18:49:28.733415   56616 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:28.733573   56616 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:49:28.733623   56616 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:49:28.733680   56616 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.key
	I0229 18:49:28.733694   56616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt with IP's: []
	I0229 18:49:28.791010   56616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt ...
	I0229 18:49:28.791038   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: {Name:mk50543b4974f7b0d4a09fb2870e44081bb4582d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:28.835058   56616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.key ...
	I0229 18:49:28.835097   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.key: {Name:mke5f475ff44a7d60f463fae93efe5254b8a5c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:28.835232   56616 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3
	I0229 18:49:28.835254   56616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3 with IP's: [192.168.72.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:49:29.019897   56616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3 ...
	I0229 18:49:29.019921   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3: {Name:mk4dac7431c0dfd64561c8fd1f0f4cb186755cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:29.020052   56616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3 ...
	I0229 18:49:29.020064   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3: {Name:mk4f643b99bfb2c97bb2ca84f2a221c98ae6ea1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:29.020133   56616 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt
	I0229 18:49:29.020216   56616 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key
	I0229 18:49:29.020287   56616 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key
	I0229 18:49:29.020300   56616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt with IP's: []
	I0229 18:49:29.156471   56616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt ...
	I0229 18:49:29.156495   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt: {Name:mk039e7b11fadfb2bda49a067152e4dd8bb9c470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:29.156662   56616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key ...
	I0229 18:49:29.156676   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key: {Name:mkb90e2c6b23fc807ff57dc47401135b79347487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
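The certs.go/crypto.go lines above generate the per-profile client, apiserver and proxy-client key pairs in Go. For reference, a hedged openssl equivalent for one of them, the client certificate, assuming ca.crt/ca.key are the minikubeCA files and that the usual minikube subject (O=system:masters, CN=minikube-user) applies, looks like:

    # sketch: issue a client certificate signed by the cluster CA (minikube does this in Go, not via openssl)
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365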
	I0229 18:49:29.156862   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:49:29.156903   56616 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:49:29.156911   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:49:29.156936   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:49:29.156960   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:49:29.156984   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:49:29.157020   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:49:29.157577   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:49:29.190200   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:49:29.220613   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:49:29.266511   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:49:29.295203   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:49:29.327955   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:49:29.356419   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:49:29.387697   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:49:29.420770   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:49:29.449894   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:49:29.479202   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:49:29.510064   56616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:49:29.530121   56616 ssh_runner.go:195] Run: openssl version
	I0229 18:49:29.536839   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:49:29.550163   56616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:29.555595   56616 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:29.555652   56616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:29.562567   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:49:29.576224   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:49:29.591681   56616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:49:29.598637   56616 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:49:29.598695   56616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:49:29.607598   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:49:29.623118   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:49:29.636932   56616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:49:29.642481   56616 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:49:29.642577   56616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:49:29.649436   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
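The three openssl/ln pairs above install each CA bundle under the hash-based name OpenSSL looks certificates up by. Condensed into one snippet (same commands, one certificate shown):

    # link a CA cert into /etc/ssl/certs under its subject-hash name
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this cert in this run
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"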
	I0229 18:49:29.664325   56616 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:49:29.671243   56616 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:49:29.671303   56616 kubeadm.go:404] StartCluster: {Name:bridge-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.206 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:49:29.671391   56616 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:49:29.671456   56616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:49:29.719205   56616 cri.go:89] found id: ""
	I0229 18:49:29.719276   56616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:49:29.731745   56616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:49:29.742889   56616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:49:29.755399   56616 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:49:29.755457   56616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 18:49:29.814723   56616 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 18:49:29.814800   56616 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:49:29.971746   56616 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:49:29.971886   56616 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:49:29.972034   56616 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:49:30.252167   56616 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:49:28.572433   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:29.147556   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:29.571509   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:30.071581   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:30.255345   54985 kubeadm.go:1088] duration metric: took 12.883640109s to wait for elevateKubeSystemPrivileges.
	I0229 18:49:30.255372   54985 kubeadm.go:406] StartCluster complete in 25.237326714s
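The long run of "kubectl get sa default" calls by process 54985 (and later by 56616) is minikube polling until the default service account exists, i.e. until kube-controller-manager has finished what the log calls elevateKubeSystemPrivileges. A standalone equivalent of that wait, reusing the exact command from the log, is:

    # block until the default service account appears in the new cluster
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done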
	I0229 18:49:30.255392   54985 settings.go:142] acquiring lock: {Name:mk54a855ef147e30c2cf7f1217afa4524cb1d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:30.255456   54985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:49:30.256879   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/kubeconfig: {Name:mk5f8fb7db84beb25fa22fdc3301133bb69ddfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:30.257134   54985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:49:30.257619   54985 config.go:182] Loaded profile config "flannel-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:49:30.257690   54985 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:49:30.257770   54985 addons.go:69] Setting storage-provisioner=true in profile "flannel-387000"
	I0229 18:49:30.257837   54985 addons.go:234] Setting addon storage-provisioner=true in "flannel-387000"
	I0229 18:49:30.257881   54985 host.go:66] Checking if "flannel-387000" exists ...
	I0229 18:49:30.258156   54985 addons.go:69] Setting default-storageclass=true in profile "flannel-387000"
	I0229 18:49:30.258171   54985 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-387000"
	I0229 18:49:30.258646   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.258691   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.259051   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.259077   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.278275   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0229 18:49:30.278851   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.278975   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0229 18:49:30.279294   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.279523   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.279542   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.279869   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.280368   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.280404   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.280950   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.280967   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.281355   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.281574   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:49:30.284908   54985 addons.go:234] Setting addon default-storageclass=true in "flannel-387000"
	I0229 18:49:30.284946   54985 host.go:66] Checking if "flannel-387000" exists ...
	I0229 18:49:30.285335   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.285363   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.301797   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0229 18:49:30.302404   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.303039   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.303066   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.306458   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0229 18:49:30.306990   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.307006   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.307339   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:49:30.307517   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.307533   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.307975   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.308574   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.308623   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.309181   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:49:30.311670   54985 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:49:30.254939   56616 out.go:204]   - Generating certificates and keys ...
	I0229 18:49:30.255042   56616 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:49:30.255133   56616 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:49:30.766905   56616 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:49:30.313783   54985 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:30.313802   54985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:49:30.313819   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:49:30.316849   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.317085   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:49:30.317110   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.317298   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:49:30.317478   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:49:30.317580   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:49:30.317664   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:49:30.329670   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0229 18:49:30.330177   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.330783   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.330806   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.331176   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.331384   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:49:30.333197   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:49:30.334826   54985 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:30.334843   54985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:49:30.334869   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:49:30.338144   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.338778   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:49:30.338801   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.338956   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:49:30.339157   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:49:30.339431   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:49:30.339660   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:49:30.502461   54985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:30.539109   54985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:30.554940   54985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 18:49:30.782362   54985 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-387000" context rescaled to 1 replicas
	I0229 18:49:30.782402   54985 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:49:30.783987   54985 out.go:177] * Verifying Kubernetes components...
	I0229 18:49:30.785300   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:49:31.601895   54985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099398912s)
	I0229 18:49:31.601940   54985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062804544s)
	I0229 18:49:31.601978   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.601990   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.601990   54985 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.047018605s)
	I0229 18:49:31.602013   54985 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
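The sed pipeline a few lines above rewrites the coredns ConfigMap so cluster DNS can resolve host.minikube.internal. Reconstructed from the sed expressions (an approximation, not a dump of the actual ConfigMap), the edited Corefile gains a block roughly like this, with "..." standing for the untouched plugins:

    .:53 {
        log
        errors
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        ...
        forward . /etc/resolv.conf
        ...
    }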
	I0229 18:49:31.601947   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.602042   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.603410   54985 node_ready.go:35] waiting up to 15m0s for node "flannel-387000" to be "Ready" ...
	I0229 18:49:31.604154   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.604179   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.604183   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.604208   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.604218   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.604224   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.604226   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.604232   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.604240   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.604247   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.607823   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.607844   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.607926   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.607950   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.607911   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.607996   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.618857   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.618881   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.619130   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.619149   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.619162   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.620981   54985 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:49:31.033759   56616 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:49:31.120891   56616 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:49:31.463853   56616 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:49:31.551551   56616 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:49:31.551893   56616 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [bridge-387000 localhost] and IPs [192.168.72.206 127.0.0.1 ::1]
	I0229 18:49:31.722990   56616 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:49:31.723158   56616 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [bridge-387000 localhost] and IPs [192.168.72.206 127.0.0.1 ::1]
	I0229 18:49:31.825373   56616 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:49:32.063471   56616 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:49:32.222614   56616 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:49:32.223114   56616 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:49:32.510014   56616 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:49:32.655275   56616 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:49:32.784615   56616 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:49:33.064676   56616 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:49:33.065222   56616 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:49:33.070795   56616 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:49:31.622660   54985 addons.go:505] enable addons completed in 1.364975416s: enabled=[storage-provisioner default-storageclass]
	I0229 18:49:33.072609   56616 out.go:204]   - Booting up control plane ...
	I0229 18:49:33.072726   56616 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:49:33.072814   56616 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:49:33.073366   56616 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:49:33.093436   56616 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:49:33.094135   56616 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:49:33.094181   56616 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:49:33.255460   56616 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:49:33.608730   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:36.109664   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:39.758816   56616 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504601 seconds
	I0229 18:49:39.758957   56616 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 18:49:39.776368   56616 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 18:49:40.309919   56616 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 18:49:40.310085   56616 kubeadm.go:322] [mark-control-plane] Marking the node bridge-387000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 18:49:40.825578   56616 kubeadm.go:322] [bootstrap-token] Using token: 48g59o.us88bsv20d2vcd89
	I0229 18:49:40.826978   56616 out.go:204]   - Configuring RBAC rules ...
	I0229 18:49:40.827126   56616 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 18:49:40.832889   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 18:49:40.847681   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 18:49:40.851826   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 18:49:40.860535   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 18:49:40.863938   56616 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 18:49:40.879602   56616 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 18:49:41.139767   56616 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 18:49:41.247367   56616 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 18:49:41.248543   56616 kubeadm.go:322] 
	I0229 18:49:41.248664   56616 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 18:49:41.248690   56616 kubeadm.go:322] 
	I0229 18:49:41.248783   56616 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 18:49:41.248792   56616 kubeadm.go:322] 
	I0229 18:49:41.248824   56616 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 18:49:41.248897   56616 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 18:49:41.248960   56616 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 18:49:41.248968   56616 kubeadm.go:322] 
	I0229 18:49:41.249052   56616 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 18:49:41.249061   56616 kubeadm.go:322] 
	I0229 18:49:41.249127   56616 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 18:49:41.249136   56616 kubeadm.go:322] 
	I0229 18:49:41.249198   56616 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 18:49:41.249301   56616 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 18:49:41.249386   56616 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 18:49:41.249395   56616 kubeadm.go:322] 
	I0229 18:49:41.249495   56616 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 18:49:41.249590   56616 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 18:49:41.249599   56616 kubeadm.go:322] 
	I0229 18:49:41.249698   56616 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 48g59o.us88bsv20d2vcd89 \
	I0229 18:49:41.249827   56616 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e \
	I0229 18:49:41.249854   56616 kubeadm.go:322] 	--control-plane 
	I0229 18:49:41.249864   56616 kubeadm.go:322] 
	I0229 18:49:41.249960   56616 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 18:49:41.249973   56616 kubeadm.go:322] 
	I0229 18:49:41.250084   56616 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 48g59o.us88bsv20d2vcd89 \
	I0229 18:49:41.250208   56616 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e 
	I0229 18:49:41.250696   56616 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:49:41.250727   56616 cni.go:84] Creating CNI manager for "bridge"
	I0229 18:49:41.252404   56616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:49:38.607932   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:41.107735   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:42.609430   54985 node_ready.go:49] node "flannel-387000" has status "Ready":"True"
	I0229 18:49:42.609457   54985 node_ready.go:38] duration metric: took 11.006003925s waiting for node "flannel-387000" to be "Ready" ...
	I0229 18:49:42.609471   54985 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:49:42.624116   54985 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:41.253727   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:49:41.269984   56616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
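The scp line above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact contents are not in the log, but a typical bridge + portmap conflist for the 10.244.0.0/16 pod CIDR has roughly this shape (illustrative only; the file minikube actually generates may differ):

    # illustrative only; not the literal file minikube wrote
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF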
	I0229 18:49:41.328740   56616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:49:41.328774   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:41.328809   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=bridge-387000 minikube.k8s.io/updated_at=2024_02_29T18_49_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:41.621377   56616 ops.go:34] apiserver oom_adj: -16
	I0229 18:49:41.621528   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:42.121610   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:42.621559   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:43.122111   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:43.622388   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:44.122162   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:44.622423   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:45.121523   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:45.622490   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:44.641882   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:47.130938   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:46.122475   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:46.622252   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:47.121950   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:47.622166   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:48.121847   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:48.621831   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:49.122205   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:49.621597   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:50.122536   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:50.622202   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:49.131339   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:51.135261   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:51.122456   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:51.621671   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:52.122623   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:52.622270   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:53.122350   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:53.622060   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:54.121825   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:54.257142   56616 kubeadm.go:1088] duration metric: took 12.928415741s to wait for elevateKubeSystemPrivileges.
	I0229 18:49:54.257178   56616 kubeadm.go:406] StartCluster complete in 24.585883521s
	I0229 18:49:54.257204   56616 settings.go:142] acquiring lock: {Name:mk54a855ef147e30c2cf7f1217afa4524cb1d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:54.257277   56616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:49:54.258372   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/kubeconfig: {Name:mk5f8fb7db84beb25fa22fdc3301133bb69ddfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:54.258640   56616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:49:54.258784   56616 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:49:54.258852   56616 config.go:182] Loaded profile config "bridge-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:49:54.258864   56616 addons.go:69] Setting storage-provisioner=true in profile "bridge-387000"
	I0229 18:49:54.258895   56616 addons.go:234] Setting addon storage-provisioner=true in "bridge-387000"
	I0229 18:49:54.258901   56616 addons.go:69] Setting default-storageclass=true in profile "bridge-387000"
	I0229 18:49:54.258931   56616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-387000"
	I0229 18:49:54.258950   56616 host.go:66] Checking if "bridge-387000" exists ...
	I0229 18:49:54.259398   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.259398   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.259445   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.259466   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.274567   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0229 18:49:54.277110   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.277767   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.277790   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.278206   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.278446   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:54.278866   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0229 18:49:54.279297   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.279768   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.279792   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.280118   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.280678   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.280726   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.281973   56616 addons.go:234] Setting addon default-storageclass=true in "bridge-387000"
	I0229 18:49:54.282013   56616 host.go:66] Checking if "bridge-387000" exists ...
	I0229 18:49:54.282392   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.282445   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.295870   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0229 18:49:54.296283   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.296777   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.296801   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.297117   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.297321   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:54.299340   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:54.301174   56616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:49:54.301561   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0229 18:49:54.302562   56616 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:54.302576   56616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:49:54.302594   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:54.303053   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.303865   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.303882   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.304245   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.304750   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.304780   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.306137   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.306593   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:54.306618   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.306870   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:54.307074   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:54.307243   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:54.307481   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:54.319774   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0229 18:49:54.320146   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.320676   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.320699   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.320988   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.321177   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:54.322823   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:54.323092   56616 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:54.323118   56616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:49:54.323138   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:54.325627   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.326037   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:54.326059   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.326323   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:54.326509   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:54.326717   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:54.326866   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:54.474211   56616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 18:49:54.543120   56616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:54.565025   56616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:54.763970   56616 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-387000" context rescaled to 1 replicas
	I0229 18:49:54.764002   56616 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.206 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:49:54.765491   56616 out.go:177] * Verifying Kubernetes components...
	I0229 18:49:54.766776   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:49:56.093245   56616 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.619001058s)
	I0229 18:49:56.093271   56616 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0229 18:49:56.244809   56616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.679752426s)
	I0229 18:49:56.244861   56616 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.478052227s)
	I0229 18:49:56.244872   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.244885   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245061   56616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.701909954s)
	I0229 18:49:56.245092   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.245103   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245193   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245254   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245272   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.245286   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.245296   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245387   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245418   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245425   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.245433   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.245439   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245527   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245550   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245557   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.245677   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245753   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245810   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.246815   56616 node_ready.go:35] waiting up to 15m0s for node "bridge-387000" to be "Ready" ...
	I0229 18:49:56.262485   56616 node_ready.go:49] node "bridge-387000" has status "Ready":"True"
	I0229 18:49:56.262508   56616 node_ready.go:38] duration metric: took 15.661393ms waiting for node "bridge-387000" to be "Ready" ...
	I0229 18:49:56.262519   56616 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:49:56.269209   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.269249   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.269550   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.269585   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.269596   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.271325   56616 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:49:53.636200   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:56.135510   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:56.274698   56616 addons.go:505] enable addons completed in 2.015917545s: enabled=[storage-provisioner default-storageclass]
	I0229 18:49:56.273154   56616 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:56.777557   56616 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6h7vf" not found
	I0229 18:49:56.777590   56616 pod_ready.go:81] duration metric: took 502.858226ms waiting for pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace to be "Ready" ...
	E0229 18:49:56.777602   56616 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6h7vf" not found
	I0229 18:49:56.777610   56616 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:58.784721   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:58.633672   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:59.646464   54985 pod_ready.go:92] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.646494   54985 pod_ready.go:81] duration metric: took 17.022352449s waiting for pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.646508   54985 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.660404   54985 pod_ready.go:92] pod "etcd-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.660434   54985 pod_ready.go:81] duration metric: took 13.918303ms waiting for pod "etcd-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.660448   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.668034   54985 pod_ready.go:92] pod "kube-apiserver-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.668068   54985 pod_ready.go:81] duration metric: took 7.603659ms waiting for pod "kube-apiserver-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.668081   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.675598   54985 pod_ready.go:92] pod "kube-controller-manager-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.675622   54985 pod_ready.go:81] duration metric: took 7.532168ms waiting for pod "kube-controller-manager-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.675635   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-9lqms" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.684548   54985 pod_ready.go:92] pod "kube-proxy-9lqms" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.684565   54985 pod_ready.go:81] duration metric: took 8.922978ms waiting for pod "kube-proxy-9lqms" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.684573   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:00.028950   54985 pod_ready.go:92] pod "kube-scheduler-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:00.028971   54985 pod_ready.go:81] duration metric: took 344.392651ms waiting for pod "kube-scheduler-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:00.028982   54985 pod_ready.go:38] duration metric: took 17.419480623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:50:00.029001   54985 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:50:00.029056   54985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:50:00.045556   54985 api_server.go:72] duration metric: took 29.263117975s to wait for apiserver process to appear ...
	I0229 18:50:00.045578   54985 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:50:00.045596   54985 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0229 18:50:00.055284   54985 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0229 18:50:00.056565   54985 api_server.go:141] control plane version: v1.28.4
	I0229 18:50:00.056586   54985 api_server.go:131] duration metric: took 11.003224ms to wait for apiserver health ...
	I0229 18:50:00.056594   54985 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:50:00.231132   54985 system_pods.go:59] 7 kube-system pods found
	I0229 18:50:00.231160   54985 system_pods.go:61] "coredns-5dd5756b68-qxt8h" [91c2382c-26ba-4455-8de8-609d87672c39] Running
	I0229 18:50:00.231165   54985 system_pods.go:61] "etcd-flannel-387000" [22a964de-c428-4fb8-8838-84573dcdce1a] Running
	I0229 18:50:00.231169   54985 system_pods.go:61] "kube-apiserver-flannel-387000" [03e54dd2-7b60-453a-990b-3645f0bf3963] Running
	I0229 18:50:00.231173   54985 system_pods.go:61] "kube-controller-manager-flannel-387000" [f912185e-07ba-4237-b0b6-82afb0a8eb0c] Running
	I0229 18:50:00.231176   54985 system_pods.go:61] "kube-proxy-9lqms" [cf865127-44ac-4dbb-b8d9-2e94bc3129bd] Running
	I0229 18:50:00.231179   54985 system_pods.go:61] "kube-scheduler-flannel-387000" [899d65db-9a8e-47ae-81bf-efffd7c9b62a] Running
	I0229 18:50:00.231185   54985 system_pods.go:61] "storage-provisioner" [b7d6d993-2e51-4ddd-a08c-3ee2ffd13c11] Running
	I0229 18:50:00.231191   54985 system_pods.go:74] duration metric: took 174.59191ms to wait for pod list to return data ...
	I0229 18:50:00.231202   54985 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:50:00.427825   54985 default_sa.go:45] found service account: "default"
	I0229 18:50:00.427848   54985 default_sa.go:55] duration metric: took 196.638007ms for default service account to be created ...
	I0229 18:50:00.427856   54985 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:50:00.630992   54985 system_pods.go:86] 7 kube-system pods found
	I0229 18:50:00.631017   54985 system_pods.go:89] "coredns-5dd5756b68-qxt8h" [91c2382c-26ba-4455-8de8-609d87672c39] Running
	I0229 18:50:00.631023   54985 system_pods.go:89] "etcd-flannel-387000" [22a964de-c428-4fb8-8838-84573dcdce1a] Running
	I0229 18:50:00.631033   54985 system_pods.go:89] "kube-apiserver-flannel-387000" [03e54dd2-7b60-453a-990b-3645f0bf3963] Running
	I0229 18:50:00.631037   54985 system_pods.go:89] "kube-controller-manager-flannel-387000" [f912185e-07ba-4237-b0b6-82afb0a8eb0c] Running
	I0229 18:50:00.631041   54985 system_pods.go:89] "kube-proxy-9lqms" [cf865127-44ac-4dbb-b8d9-2e94bc3129bd] Running
	I0229 18:50:00.631044   54985 system_pods.go:89] "kube-scheduler-flannel-387000" [899d65db-9a8e-47ae-81bf-efffd7c9b62a] Running
	I0229 18:50:00.631048   54985 system_pods.go:89] "storage-provisioner" [b7d6d993-2e51-4ddd-a08c-3ee2ffd13c11] Running
	I0229 18:50:00.631054   54985 system_pods.go:126] duration metric: took 203.193764ms to wait for k8s-apps to be running ...
	I0229 18:50:00.631060   54985 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:50:00.631100   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:50:00.647362   54985 system_svc.go:56] duration metric: took 16.295671ms WaitForService to wait for kubelet.
	I0229 18:50:00.647389   54985 kubeadm.go:581] duration metric: took 29.864953234s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:50:00.647411   54985 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:50:00.828295   54985 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:50:00.828327   54985 node_conditions.go:123] node cpu capacity is 2
	I0229 18:50:00.828337   54985 node_conditions.go:105] duration metric: took 180.921273ms to run NodePressure ...
	I0229 18:50:00.828349   54985 start.go:228] waiting for startup goroutines ...
	I0229 18:50:00.828354   54985 start.go:233] waiting for cluster config update ...
	I0229 18:50:00.828363   54985 start.go:242] writing updated cluster config ...
	I0229 18:50:00.828577   54985 ssh_runner.go:195] Run: rm -f paused
	I0229 18:50:00.875485   54985 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:50:00.877652   54985 out.go:177] * Done! kubectl is now configured to use "flannel-387000" cluster and "default" namespace by default
	I0229 18:50:00.784892   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:03.284667   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:05.785478   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:07.787316   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:10.284704   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:12.785430   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:15.284253   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:17.287335   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:19.784091   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:22.284722   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:24.286943   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:26.785520   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:28.785970   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:30.786304   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:33.285494   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:35.284449   56616 pod_ready.go:92] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.284470   56616 pod_ready.go:81] duration metric: took 38.506852963s waiting for pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.284479   56616 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.289780   56616 pod_ready.go:92] pod "etcd-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.289799   56616 pod_ready.go:81] duration metric: took 5.315104ms waiting for pod "etcd-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.289807   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.294824   56616 pod_ready.go:92] pod "kube-apiserver-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.294842   56616 pod_ready.go:81] duration metric: took 5.028182ms waiting for pod "kube-apiserver-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.294852   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.299640   56616 pod_ready.go:92] pod "kube-controller-manager-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.299654   56616 pod_ready.go:81] duration metric: took 4.795712ms waiting for pod "kube-controller-manager-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.299661   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-mkwsw" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.304614   56616 pod_ready.go:92] pod "kube-proxy-mkwsw" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.304626   56616 pod_ready.go:81] duration metric: took 4.960046ms waiting for pod "kube-proxy-mkwsw" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.304633   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.681642   56616 pod_ready.go:92] pod "kube-scheduler-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.681664   56616 pod_ready.go:81] duration metric: took 377.024979ms waiting for pod "kube-scheduler-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.681673   56616 pod_ready.go:38] duration metric: took 39.41914281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:50:35.681686   56616 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:50:35.681729   56616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:50:35.698571   56616 api_server.go:72] duration metric: took 40.934535224s to wait for apiserver process to appear ...
	I0229 18:50:35.698596   56616 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:50:35.698612   56616 api_server.go:253] Checking apiserver healthz at https://192.168.72.206:8443/healthz ...
	I0229 18:50:35.703217   56616 api_server.go:279] https://192.168.72.206:8443/healthz returned 200:
	ok
	I0229 18:50:35.704671   56616 api_server.go:141] control plane version: v1.28.4
	I0229 18:50:35.704693   56616 api_server.go:131] duration metric: took 6.09165ms to wait for apiserver health ...
	I0229 18:50:35.704700   56616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:50:35.884622   56616 system_pods.go:59] 7 kube-system pods found
	I0229 18:50:35.884650   56616 system_pods.go:61] "coredns-5dd5756b68-hpkhw" [451828e2-19b3-4425-b363-75fffabf5390] Running
	I0229 18:50:35.884654   56616 system_pods.go:61] "etcd-bridge-387000" [8f1f0795-62bc-4013-be47-19f384d6457e] Running
	I0229 18:50:35.884658   56616 system_pods.go:61] "kube-apiserver-bridge-387000" [8c7bd96b-9ce4-4036-9b1d-afd35eb17b6a] Running
	I0229 18:50:35.884661   56616 system_pods.go:61] "kube-controller-manager-bridge-387000" [152cc6f1-67ff-4972-84ab-8a09faac9c4d] Running
	I0229 18:50:35.884664   56616 system_pods.go:61] "kube-proxy-mkwsw" [8dff43d1-caa4-4fea-ae29-cc3d55c585f4] Running
	I0229 18:50:35.884666   56616 system_pods.go:61] "kube-scheduler-bridge-387000" [66a3a2c5-c283-4afb-9124-3e2242ab2cab] Running
	I0229 18:50:35.884669   56616 system_pods.go:61] "storage-provisioner" [b7deeece-3360-41fd-9102-5ff10007f1e5] Running
	I0229 18:50:35.884680   56616 system_pods.go:74] duration metric: took 179.975669ms to wait for pod list to return data ...
	I0229 18:50:35.884687   56616 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:50:36.082244   56616 default_sa.go:45] found service account: "default"
	I0229 18:50:36.082267   56616 default_sa.go:55] duration metric: took 197.571615ms for default service account to be created ...
	I0229 18:50:36.082274   56616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:50:36.286398   56616 system_pods.go:86] 7 kube-system pods found
	I0229 18:50:36.286426   56616 system_pods.go:89] "coredns-5dd5756b68-hpkhw" [451828e2-19b3-4425-b363-75fffabf5390] Running
	I0229 18:50:36.286432   56616 system_pods.go:89] "etcd-bridge-387000" [8f1f0795-62bc-4013-be47-19f384d6457e] Running
	I0229 18:50:36.286436   56616 system_pods.go:89] "kube-apiserver-bridge-387000" [8c7bd96b-9ce4-4036-9b1d-afd35eb17b6a] Running
	I0229 18:50:36.286440   56616 system_pods.go:89] "kube-controller-manager-bridge-387000" [152cc6f1-67ff-4972-84ab-8a09faac9c4d] Running
	I0229 18:50:36.286447   56616 system_pods.go:89] "kube-proxy-mkwsw" [8dff43d1-caa4-4fea-ae29-cc3d55c585f4] Running
	I0229 18:50:36.286452   56616 system_pods.go:89] "kube-scheduler-bridge-387000" [66a3a2c5-c283-4afb-9124-3e2242ab2cab] Running
	I0229 18:50:36.286456   56616 system_pods.go:89] "storage-provisioner" [b7deeece-3360-41fd-9102-5ff10007f1e5] Running
	I0229 18:50:36.286462   56616 system_pods.go:126] duration metric: took 204.182782ms to wait for k8s-apps to be running ...
	I0229 18:50:36.286468   56616 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:50:36.286508   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:50:36.305908   56616 system_svc.go:56] duration metric: took 19.43363ms WaitForService to wait for kubelet.
	I0229 18:50:36.305933   56616 kubeadm.go:581] duration metric: took 41.541901185s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:50:36.305950   56616 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:50:36.482097   56616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:50:36.482123   56616 node_conditions.go:123] node cpu capacity is 2
	I0229 18:50:36.482135   56616 node_conditions.go:105] duration metric: took 176.181312ms to run NodePressure ...
	I0229 18:50:36.482145   56616 start.go:228] waiting for startup goroutines ...
	I0229 18:50:36.482151   56616 start.go:233] waiting for cluster config update ...
	I0229 18:50:36.482161   56616 start.go:242] writing updated cluster config ...
	I0229 18:50:36.482394   56616 ssh_runner.go:195] Run: rm -f paused
	I0229 18:50:36.531423   56616 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:50:36.533299   56616 out.go:177] * Done! kubectl is now configured to use "bridge-387000" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> containerd <==
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.986741472Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987004867Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987056148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987317854Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987441432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987501059Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987549532Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987598311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987871455Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseR
untimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMiss
ingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987985628Z" level=info msg="Connect containerd service"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988045901Z" level=info msg="using legacy CRI server"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988078198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988266930Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989037697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989295153Z" level=info msg="Start subscribing containerd event"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989377058Z" level=info msg="Start recovering state"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.990279282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.990471179Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034388260Z" level=info msg="Start event monitor"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034498239Z" level=info msg="Start snapshots syncer"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034517505Z" level=info msg="Start cni network conf syncer for default"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034527207Z" level=info msg="Start streaming server"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034557306Z" level=info msg="containerd successfully booted in 0.090065s"
	Feb 29 18:42:48 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:42:48.052015588Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 18:42:48 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:42:48.052339514Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/.keep\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052023] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044537] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.656777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.325515] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730699] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.588499] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +0.058606] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067411] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
	[  +0.168224] systemd-fstab-generator[509]: Ignoring "noauto" option for root device
	[  +0.171213] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.318931] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +5.900853] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.061557] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.980225] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.271847] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[Feb29 18:42] systemd-fstab-generator[7934]: Ignoring "noauto" option for root device
	[  +0.070905] kauditd_printk_skb: 15 callbacks suppressed
	[Feb29 18:44] systemd-fstab-generator[9618]: Ignoring "noauto" option for root device
	[  +0.076239] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:55:45 up 17 min,  0 users,  load average: 0.13, 0.27, 0.18
	Linux old-k8s-version-561577 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 18:55:43 old-k8s-version-561577 kubelet[18872]: F0229 18:55:43.697553   18872 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:55:43 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:55:43 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:55:44 old-k8s-version-561577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 876.
	Feb 29 18:55:44 old-k8s-version-561577 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:55:44 old-k8s-version-561577 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:55:44 old-k8s-version-561577 kubelet[18882]: I0229 18:55:44.452524   18882 server.go:410] Version: v1.16.0
	Feb 29 18:55:44 old-k8s-version-561577 kubelet[18882]: I0229 18:55:44.452822   18882 plugins.go:100] No cloud provider specified.
	Feb 29 18:55:44 old-k8s-version-561577 kubelet[18882]: I0229 18:55:44.452833   18882 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:55:44 old-k8s-version-561577 kubelet[18882]: I0229 18:55:44.455072   18882 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:55:44 old-k8s-version-561577 kubelet[18882]: W0229 18:55:44.456215   18882 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:55:44 old-k8s-version-561577 kubelet[18882]: F0229 18:55:44.456290   18882 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:55:44 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:55:44 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:55:45 old-k8s-version-561577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 877.
	Feb 29 18:55:45 old-k8s-version-561577 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:55:45 old-k8s-version-561577 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:55:45 old-k8s-version-561577 kubelet[18915]: I0229 18:55:45.158170   18915 server.go:410] Version: v1.16.0
	Feb 29 18:55:45 old-k8s-version-561577 kubelet[18915]: I0229 18:55:45.158966   18915 plugins.go:100] No cloud provider specified.
	Feb 29 18:55:45 old-k8s-version-561577 kubelet[18915]: I0229 18:55:45.159068   18915 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:55:45 old-k8s-version-561577 kubelet[18915]: I0229 18:55:45.161897   18915 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:55:45 old-k8s-version-561577 kubelet[18915]: W0229 18:55:45.163597   18915 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:55:45 old-k8s-version-561577 kubelet[18915]: F0229 18:55:45.163695   18915 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:55:45 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:55:45 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (240.330894ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-561577" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (354s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:47.198892   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:55:57.439062   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:56:05.088294   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 18:56:05.740070   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 12 times in succession)
E0229 18:56:17.920121   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:56:19.950570   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 18:56:20.603482   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:56:22.822769   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 6 times in succession)
E0229 18:56:28.014709   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 5 times in succession)
E0229 18:56:33.751240   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 14 times in succession)
E0229 18:56:47.633939   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 6 times in succession)
E0229 18:56:53.781455   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:56:55.697939   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:56:58.881296   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 46 times in succession)
E0229 18:57:44.743405   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:57:46.915388   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 28 times in succession)
E0229 18:58:14.599318   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/calico-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 6 times in succession)
E0229 18:58:20.802473   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:58:21.897846   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 13 times in succession)
E0229 18:58:34.028920   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
(warning above repeated 15 times in succession)
E0229 18:58:49.580595   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/custom-flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:59:09.939025   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:59:37.621611   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/enable-default-cni-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 18:59:42.039458   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 19:00:00.899310   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 19:00:28.583910   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 19:00:36.959206   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 19:01:04.643438   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 19:01:19.950482   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/kindnet-387000/client.crt: no such file or directory
E0229 19:01:20.603313   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 19:01:28.014612   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/auto-387000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
E0229 19:01:33.750855   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.66:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.66:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (244.71228ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-561577" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-561577 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-561577 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.733µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-561577 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
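The block of repeated warnings above is the helper polling the apiserver for pods labelled k8s-app=kubernetes-dashboard; with the apiserver on 192.168.39.66:8443 refusing connections, every poll fails until the 9m0s deadline expires and the test gives up. A rough client-go sketch of that kind of poll (illustrative only, not the test helper's actual code; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the real tests point at the profile's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// This is the path that produces the "connection refused" warnings above.
			fmt.Println("WARNING: pod list returned:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
		return
	}
	fmt.Println("pod \"k8s-app=kubernetes-dashboard\" failed to start within 9m0s")
}

In the failing run the List call never succeeds, so the loop exhausts the deadline exactly as reported above.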
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (225.29263ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-561577 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-387000 sudo iptables                       | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo docker                         | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo cat                            | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC |                     |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo                                | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo find                           | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-387000 sudo crio                           | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-387000                                     | bridge-387000 | jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
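	The Audit table above records every minikube invocation made while collecting diagnostics for the bridge-387000 profile. Any row can be replayed against the same binary; a minimal sketch driving one of the kubelet checks from Go (arguments copied from the table, everything else assumed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the Audit row: ssh -p bridge-387000 sudo systemctl status kubelet --all --full --no-pager
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "bridge-387000",
			"sudo", "systemctl", "status", "kubelet", "--all", "--full", "--no-pager")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("non-zero exit:", err)
		}
	}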
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:48:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
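	All entries below follow the klog header described in the line above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). For ad-hoc filtering of a log this size, a small parser helps; the regular expression below is an illustration written against that format, not anything shipped with minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Splits the klog header into severity, month, day, time, thread id, file:line and message.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I0229 18:48:50.773132   56616 out.go:291] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s%s time=%s thread=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}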
	I0229 18:48:50.773132   56616 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:48:50.773365   56616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:48:50.773374   56616 out.go:304] Setting ErrFile to fd 2...
	I0229 18:48:50.773378   56616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:48:50.773574   56616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:48:50.774145   56616 out.go:298] Setting JSON to false
	I0229 18:48:50.775813   56616 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5472,"bootTime":1709227059,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:48:50.776131   56616 start.go:139] virtualization: kvm guest
	I0229 18:48:50.778009   56616 out.go:177] * [bridge-387000] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:48:50.779099   56616 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:48:50.780171   56616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:48:50.779131   56616 notify.go:220] Checking for updates...
	I0229 18:48:50.782320   56616 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:48:50.783513   56616 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:48:50.784694   56616 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:48:50.785822   56616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:48:50.787580   56616 config.go:182] Loaded profile config "enable-default-cni-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:48:50.787729   56616 config.go:182] Loaded profile config "flannel-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:48:50.787857   56616 config.go:182] Loaded profile config "old-k8s-version-561577": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:48:50.787939   56616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:48:50.822724   56616 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:48:50.824108   56616 start.go:299] selected driver: kvm2
	I0229 18:48:50.824118   56616 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:48:50.824128   56616 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:48:50.824768   56616 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:48:50.824842   56616 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:48:50.839423   56616 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:48:50.839458   56616 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:48:50.839652   56616 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:48:50.839707   56616 cni.go:84] Creating CNI manager for "bridge"
	I0229 18:48:50.839719   56616 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:48:50.839730   56616 start_flags.go:323] config:
	{Name:bridge-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
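	The config printout above is the full cluster specification minikube derived from the start flags; only a few of its fields actually differ from defaults in this run. A simplified stand-in showing those fields as a Go struct (field names and values copied from the printout; the type itself is an illustration, not minikube's real config type):

	package main

	import "fmt"

	// Trimmed stand-in for the generated cluster config; the real type is much larger.
	type clusterConfig struct {
		Name              string
		Driver            string
		Memory            int // MB
		CPUs              int
		DiskSize          int // MB
		KubernetesVersion string
		ContainerRuntime  string
		NetworkPlugin     string
		CNI               string
	}

	func main() {
		cfg := clusterConfig{
			Name:              "bridge-387000",
			Driver:            "kvm2",
			Memory:            3072,
			CPUs:              2,
			DiskSize:          20000,
			KubernetesVersion: "v1.28.4",
			ContainerRuntime:  "containerd",
			NetworkPlugin:     "cni",
			CNI:               "bridge",
		}
		fmt.Printf("%+v\n", cfg)
	}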
	I0229 18:48:50.839839   56616 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:48:50.842288   56616 out.go:177] * Starting control plane node bridge-387000 in cluster bridge-387000
	I0229 18:48:48.639911   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:51.137879   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:51.047516   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:51.048138   54985 main.go:141] libmachine: (flannel-387000) Found IP for machine: 192.168.50.138
	I0229 18:48:51.048163   54985 main.go:141] libmachine: (flannel-387000) Reserving static IP address...
	I0229 18:48:51.048184   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has current primary IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:51.048529   54985 main.go:141] libmachine: (flannel-387000) DBG | unable to find host DHCP lease matching {name: "flannel-387000", mac: "52:54:00:39:87:55", ip: "192.168.50.138"} in network mk-flannel-387000
	I0229 18:48:51.120924   54985 main.go:141] libmachine: (flannel-387000) DBG | Getting to WaitForSSH function...
	I0229 18:48:51.120963   54985 main.go:141] libmachine: (flannel-387000) Reserved static IP address: 192.168.50.138
	I0229 18:48:51.120976   54985 main.go:141] libmachine: (flannel-387000) Waiting for SSH to be available...
	I0229 18:48:51.123675   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:51.123962   54985 main.go:141] libmachine: (flannel-387000) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000
	I0229 18:48:51.123987   54985 main.go:141] libmachine: (flannel-387000) DBG | unable to find defined IP address of network mk-flannel-387000 interface with MAC address 52:54:00:39:87:55
	I0229 18:48:51.124162   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH client type: external
	I0229 18:48:51.124187   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa (-rw-------)
	I0229 18:48:51.124218   54985 main.go:141] libmachine: (flannel-387000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:48:51.124230   54985 main.go:141] libmachine: (flannel-387000) DBG | About to run SSH command:
	I0229 18:48:51.124247   54985 main.go:141] libmachine: (flannel-387000) DBG | exit 0
	I0229 18:48:51.127797   54985 main.go:141] libmachine: (flannel-387000) DBG | SSH cmd err, output: exit status 255: 
	I0229 18:48:51.127823   54985 main.go:141] libmachine: (flannel-387000) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 18:48:51.127834   54985 main.go:141] libmachine: (flannel-387000) DBG | command : exit 0
	I0229 18:48:51.127845   54985 main.go:141] libmachine: (flannel-387000) DBG | err     : exit status 255
	I0229 18:48:51.127856   54985 main.go:141] libmachine: (flannel-387000) DBG | output  : 
	I0229 18:48:50.843585   56616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:48:50.843612   56616 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 18:48:50.843618   56616 cache.go:56] Caching tarball of preloaded images
	I0229 18:48:50.843706   56616 preload.go:174] Found /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:48:50.843719   56616 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 18:48:50.843791   56616 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/config.json ...
	I0229 18:48:50.843806   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/config.json: {Name:mk17c54d02704fa964d1848bcdb1d8f1ad0d67be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:48:50.843941   56616 start.go:365] acquiring machines lock for bridge-387000: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:48:55.279726   56616 start.go:369] acquired machines lock for "bridge-387000" in 4.43574817s
	I0229 18:48:55.279785   56616 start.go:93] Provisioning new machine with config: &{Name:bridge-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:48:55.279947   56616 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:48:55.282286   56616 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 18:48:55.282483   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:48:55.282528   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:48:55.299090   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0229 18:48:55.299478   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:48:55.300044   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:48:55.300064   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:48:55.300367   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:48:55.300539   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:48:55.300689   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:48:55.300847   56616 start.go:159] libmachine.API.Create for "bridge-387000" (driver="kvm2")
	I0229 18:48:55.300887   56616 client.go:168] LocalClient.Create starting
	I0229 18:48:55.300919   56616 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem
	I0229 18:48:55.300957   56616 main.go:141] libmachine: Decoding PEM data...
	I0229 18:48:55.300978   56616 main.go:141] libmachine: Parsing certificate...
	I0229 18:48:55.301045   56616 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem
	I0229 18:48:55.301069   56616 main.go:141] libmachine: Decoding PEM data...
	I0229 18:48:55.301092   56616 main.go:141] libmachine: Parsing certificate...
	I0229 18:48:55.301117   56616 main.go:141] libmachine: Running pre-create checks...
	I0229 18:48:55.301135   56616 main.go:141] libmachine: (bridge-387000) Calling .PreCreateCheck
	I0229 18:48:55.301462   56616 main.go:141] libmachine: (bridge-387000) Calling .GetConfigRaw
	I0229 18:48:55.301887   56616 main.go:141] libmachine: Creating machine...
	I0229 18:48:55.301907   56616 main.go:141] libmachine: (bridge-387000) Calling .Create
	I0229 18:48:55.302064   56616 main.go:141] libmachine: (bridge-387000) Creating KVM machine...
	I0229 18:48:55.303167   56616 main.go:141] libmachine: (bridge-387000) DBG | found existing default KVM network
	I0229 18:48:55.304288   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.304131   56679 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ed:28:6a} reservation:<nil>}
	I0229 18:48:55.305179   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.305108   56679 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:76:62} reservation:<nil>}
	I0229 18:48:55.306011   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.305938   56679 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:cd:16} reservation:<nil>}
	I0229 18:48:55.307171   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.307074   56679 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a7980}
	I0229 18:48:55.312399   56616 main.go:141] libmachine: (bridge-387000) DBG | trying to create private KVM network mk-bridge-387000 192.168.72.0/24...
	I0229 18:48:55.388048   56616 main.go:141] libmachine: (bridge-387000) DBG | private KVM network mk-bridge-387000 192.168.72.0/24 created
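	The three "skipping subnet" lines show the kvm2 driver walking candidate /24 networks until it finds one that no existing libvirt network occupies, then creating mk-bridge-387000 on 192.168.72.0/24. A rough sketch of that selection loop (the candidate list and the taken set are hard-coded here from the log; the real driver queries libvirt for them):

	package main

	import "fmt"

	func main() {
		// Subnets already reserved by other profiles, as reported in the log above.
		taken := map[string]bool{
			"192.168.39.0/24": true,
			"192.168.50.0/24": true,
			"192.168.61.0/24": true,
		}
		// Candidate private /24s tried in order (illustrative, not minikube's exact list).
		candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}

		for _, cidr := range candidates {
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("no free subnet found")
	}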
	I0229 18:48:55.388081   56616 main.go:141] libmachine: (bridge-387000) Setting up store path in /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000 ...
	I0229 18:48:55.388090   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.388013   56679 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:48:55.388152   56616 main.go:141] libmachine: (bridge-387000) Building disk image from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:48:55.388185   56616 main.go:141] libmachine: (bridge-387000) Downloading /home/jenkins/minikube-integration/18259-6412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:48:55.672301   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.672088   56679 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa...
	I0229 18:48:53.138358   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:55.637973   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:54.128463   54985 main.go:141] libmachine: (flannel-387000) DBG | Getting to WaitForSSH function...
	I0229 18:48:54.130836   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.131203   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.131233   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.131376   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH client type: external
	I0229 18:48:54.131405   54985 main.go:141] libmachine: (flannel-387000) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa (-rw-------)
	I0229 18:48:54.131431   54985 main.go:141] libmachine: (flannel-387000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:48:54.131445   54985 main.go:141] libmachine: (flannel-387000) DBG | About to run SSH command:
	I0229 18:48:54.131459   54985 main.go:141] libmachine: (flannel-387000) DBG | exit 0
	I0229 18:48:54.254430   54985 main.go:141] libmachine: (flannel-387000) DBG | SSH cmd err, output: <nil>: 
	I0229 18:48:54.254741   54985 main.go:141] libmachine: (flannel-387000) KVM machine creation complete!
	I0229 18:48:54.255063   54985 main.go:141] libmachine: (flannel-387000) Calling .GetConfigRaw
	I0229 18:48:54.255534   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:54.255734   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:54.255907   54985 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:48:54.255920   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:48:54.257161   54985 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:48:54.257175   54985 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:48:54.257180   54985 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:48:54.257186   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.259535   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.259914   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.259945   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.260057   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.260218   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.260371   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.260533   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.260687   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.260872   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.260882   54985 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:48:54.362305   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
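	The "About to run SSH command: exit 0" / "SSH cmd err, output: <nil>" pair is libmachine's readiness probe: it keeps running a no-op command over SSH until the guest answers, which is why the earlier attempt at 18:48:51 logged exit status 255 while this one succeeds. A bare-bones version of the same probe using golang.org/x/crypto/ssh (host address, key path and retry policy are placeholders):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/flannel-387000/id_rsa") // placeholder key path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
			Timeout:         10 * time.Second,
		}

		for attempt := 1; attempt <= 30; attempt++ {
			client, err := ssh.Dial("tcp", "192.168.50.138:22", cfg)
			if err == nil {
				session, serr := client.NewSession()
				if serr == nil {
					runErr := session.Run("exit 0")
					session.Close()
					client.Close()
					if runErr == nil {
						fmt.Println("SSH is available")
						return
					}
				} else {
					client.Close()
				}
			}
			fmt.Println("SSH not ready yet, retrying...")
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}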
	I0229 18:48:54.362333   54985 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:48:54.362344   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.364852   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.365248   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.365284   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.365411   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.365605   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.365765   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.365910   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.366047   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.366217   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.366228   54985 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:48:54.476176   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:48:54.476231   54985 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:48:54.476237   54985 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:48:54.476244   54985 main.go:141] libmachine: (flannel-387000) Calling .GetMachineName
	I0229 18:48:54.476456   54985 buildroot.go:166] provisioning hostname "flannel-387000"
	I0229 18:48:54.476474   54985 main.go:141] libmachine: (flannel-387000) Calling .GetMachineName
	I0229 18:48:54.476653   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.479228   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.479574   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.479601   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.479814   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.480005   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.480193   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.480339   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.480513   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.480683   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.480694   54985 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-387000 && echo "flannel-387000" | sudo tee /etc/hostname
	I0229 18:48:54.599481   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-387000
	
	I0229 18:48:54.599508   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.602400   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.602741   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.602769   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.603004   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.603162   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.603382   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.603513   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.603668   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:54.603855   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:54.603878   54985 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:48:54.717276   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:48:54.717305   54985 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:48:54.717329   54985 buildroot.go:174] setting up certificates
	I0229 18:48:54.717341   54985 provision.go:83] configureAuth start
	I0229 18:48:54.717368   54985 main.go:141] libmachine: (flannel-387000) Calling .GetMachineName
	I0229 18:48:54.717639   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:54.720649   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.721011   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.721036   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.721231   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.723814   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.724159   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.724189   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.724351   54985 provision.go:138] copyHostCerts
	I0229 18:48:54.724411   54985 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:48:54.724427   54985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:48:54.724488   54985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:48:54.724578   54985 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:48:54.724585   54985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:48:54.724608   54985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:48:54.724694   54985 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:48:54.724701   54985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:48:54.724724   54985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:48:54.724811   54985 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.flannel-387000 san=[192.168.50.138 192.168.50.138 localhost 127.0.0.1 minikube flannel-387000]
	I0229 18:48:54.858082   54985 provision.go:172] copyRemoteCerts
	I0229 18:48:54.858139   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:48:54.858170   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:54.860744   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.861068   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:54.861093   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:54.861264   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:54.861446   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:54.861601   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:54.861790   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:54.950399   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:48:54.978245   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0229 18:48:55.009882   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:48:55.035218   54985 provision.go:86] duration metric: configureAuth took 317.866623ms
	I0229 18:48:55.035242   54985 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:48:55.035401   54985 config.go:182] Loaded profile config "flannel-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:48:55.035426   54985 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:48:55.035442   54985 main.go:141] libmachine: (flannel-387000) Calling .GetURL
	I0229 18:48:55.036662   54985 main.go:141] libmachine: (flannel-387000) DBG | Using libvirt version 6000000
	I0229 18:48:55.038759   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.039104   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.039134   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.039285   54985 main.go:141] libmachine: Docker is up and running!
	I0229 18:48:55.039304   54985 main.go:141] libmachine: Reticulating splines...
	I0229 18:48:55.039312   54985 client.go:171] LocalClient.Create took 31.777126651s
	I0229 18:48:55.039337   54985 start.go:167] duration metric: libmachine.API.Create for "flannel-387000" took 31.77720499s
	I0229 18:48:55.039347   54985 start.go:300] post-start starting for "flannel-387000" (driver="kvm2")
	I0229 18:48:55.039355   54985 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:48:55.039370   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.039619   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:48:55.039641   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.041889   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.042187   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.042223   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.042360   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.042583   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.042721   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.042836   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:55.126438   54985 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:48:55.131071   54985 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:48:55.131095   54985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:48:55.131163   54985 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:48:55.131253   54985 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:48:55.131369   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:48:55.143382   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:48:55.170742   54985 start.go:303] post-start completed in 131.386068ms
	I0229 18:48:55.170782   54985 main.go:141] libmachine: (flannel-387000) Calling .GetConfigRaw
	I0229 18:48:55.171346   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:55.174022   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.174346   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.174380   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.174636   54985 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/config.json ...
	I0229 18:48:55.174797   54985 start.go:128] duration metric: createHost completed in 31.931014733s
	I0229 18:48:55.174818   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.176833   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.177128   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.177153   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.177323   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.177509   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.177663   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.177824   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.177959   54985 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:55.178180   54985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.138 22 <nil> <nil>}
	I0229 18:48:55.178197   54985 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:48:55.279538   54985 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232535.269815607
	
	I0229 18:48:55.279559   54985 fix.go:206] guest clock: 1709232535.269815607
	I0229 18:48:55.279568   54985 fix.go:219] Guest: 2024-02-29 18:48:55.269815607 +0000 UTC Remote: 2024-02-29 18:48:55.174807849 +0000 UTC m=+32.064580051 (delta=95.007758ms)
	I0229 18:48:55.279626   54985 fix.go:190] guest clock delta is within tolerance: 95.007758ms
	I0229 18:48:55.279634   54985 start.go:83] releasing machines lock for "flannel-387000", held for 32.035959699s
	I0229 18:48:55.279668   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.279936   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:55.282606   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.282973   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.282999   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.283205   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.283675   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.283842   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:48:55.283944   54985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:48:55.283988   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.284030   54985 ssh_runner.go:195] Run: cat /version.json
	I0229 18:48:55.284058   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:48:55.286624   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.286894   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.287012   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.287034   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.287208   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:55.287235   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.287241   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:55.287414   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.287416   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:48:55.287616   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.287618   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:48:55.287809   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:48:55.287814   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:55.287920   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:48:55.394397   54985 ssh_runner.go:195] Run: systemctl --version
	I0229 18:48:55.402146   54985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:48:55.411717   54985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:48:55.411800   54985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:48:55.442497   54985 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:48:55.442525   54985 start.go:475] detecting cgroup driver to use...
	I0229 18:48:55.442612   54985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:48:55.740595   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:48:55.757748   54985 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:48:55.757797   54985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:48:55.775921   54985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:48:55.793972   54985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:48:55.927103   54985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:48:56.065634   54985 docker.go:233] disabling docker service ...
	I0229 18:48:56.065711   54985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:48:56.082468   54985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:48:56.097030   54985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:48:56.267367   54985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:48:56.393663   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:48:56.409105   54985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:48:56.430079   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:48:56.442345   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:48:56.453625   54985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:48:56.453677   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:48:56.465097   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:48:56.476377   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:48:56.492078   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:48:56.505091   54985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:48:56.516674   54985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
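
Note: the run of sed edits above rewrites /etc/containerd/config.toml so that containerd uses the cgroupfs cgroup driver (SystemdCgroup = false), the registry.k8s.io/pause:3.9 sandbox image, the runc v2 shim in place of the legacy runtime.v1.linux runtime, and /etc/cni/net.d as its CNI config directory. The Go sketch below applies the same kind of regexp rewrites to a config string held in memory; the helper name and sample input are illustrative, not minikube's source.

package main

import (
	"fmt"
	"regexp"
)

// rewriteContainerdConfig applies the same kind of regexp rewrites to a
// config.toml string that the logged `sed -i -r` commands apply on the guest.
// Illustrative helper only, not minikube's actual code.
func rewriteContainerdConfig(cfg string) string {
	rules := []struct{ pattern, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`},
		{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		cfg = regexp.MustCompile(r.pattern).ReplaceAllString(cfg, r.repl)
	}
	return cfg
}

func main() {
	sample := "  SystemdCgroup = true\n  sandbox_image = \"registry.k8s.io/pause:3.8\"\n"
	fmt.Print(rewriteContainerdConfig(sample))
}
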
	I0229 18:48:56.527676   54985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:48:56.537464   54985 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:48:56.537515   54985 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:48:56.552483   54985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:48:56.562495   54985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:48:56.705808   54985 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:48:56.737148   54985 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:48:56.737243   54985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:48:56.742880   54985 retry.go:31] will retry after 1.363497332s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:48:58.106643   54985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:48:58.112963   54985 start.go:543] Will wait 60s for crictl version
	I0229 18:48:58.113022   54985 ssh_runner.go:195] Run: which crictl
	I0229 18:48:58.117960   54985 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:48:58.158237   54985 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:48:58.158311   54985 ssh_runner.go:195] Run: containerd --version
	I0229 18:48:58.200231   54985 ssh_runner.go:195] Run: containerd --version
	I0229 18:48:58.230896   54985 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 18:48:55.905787   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.905640   56679 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/bridge-387000.rawdisk...
	I0229 18:48:55.905831   56616 main.go:141] libmachine: (bridge-387000) DBG | Writing magic tar header
	I0229 18:48:55.905845   56616 main.go:141] libmachine: (bridge-387000) DBG | Writing SSH key tar header
	I0229 18:48:55.905857   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:55.905790   56679 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000 ...
	I0229 18:48:55.905964   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000 (perms=drwx------)
	I0229 18:48:55.905988   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000
	I0229 18:48:55.905996   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:48:55.906027   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines
	I0229 18:48:55.906047   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:48:55.906069   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube (perms=drwxr-xr-x)
	I0229 18:48:55.906086   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412 (perms=drwxrwxr-x)
	I0229 18:48:55.906099   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:48:55.906129   56616 main.go:141] libmachine: (bridge-387000) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:48:55.906148   56616 main.go:141] libmachine: (bridge-387000) Creating domain...
	I0229 18:48:55.906161   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412
	I0229 18:48:55.906175   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:48:55.906183   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:48:55.906195   56616 main.go:141] libmachine: (bridge-387000) DBG | Checking permissions on dir: /home
	I0229 18:48:55.906221   56616 main.go:141] libmachine: (bridge-387000) DBG | Skipping /home - not owner
	I0229 18:48:55.907207   56616 main.go:141] libmachine: (bridge-387000) define libvirt domain using xml: 
	I0229 18:48:55.907227   56616 main.go:141] libmachine: (bridge-387000) <domain type='kvm'>
	I0229 18:48:55.907234   56616 main.go:141] libmachine: (bridge-387000)   <name>bridge-387000</name>
	I0229 18:48:55.907240   56616 main.go:141] libmachine: (bridge-387000)   <memory unit='MiB'>3072</memory>
	I0229 18:48:55.907245   56616 main.go:141] libmachine: (bridge-387000)   <vcpu>2</vcpu>
	I0229 18:48:55.907249   56616 main.go:141] libmachine: (bridge-387000)   <features>
	I0229 18:48:55.907255   56616 main.go:141] libmachine: (bridge-387000)     <acpi/>
	I0229 18:48:55.907261   56616 main.go:141] libmachine: (bridge-387000)     <apic/>
	I0229 18:48:55.907266   56616 main.go:141] libmachine: (bridge-387000)     <pae/>
	I0229 18:48:55.907273   56616 main.go:141] libmachine: (bridge-387000)     
	I0229 18:48:55.907281   56616 main.go:141] libmachine: (bridge-387000)   </features>
	I0229 18:48:55.907293   56616 main.go:141] libmachine: (bridge-387000)   <cpu mode='host-passthrough'>
	I0229 18:48:55.907304   56616 main.go:141] libmachine: (bridge-387000)   
	I0229 18:48:55.907314   56616 main.go:141] libmachine: (bridge-387000)   </cpu>
	I0229 18:48:55.907332   56616 main.go:141] libmachine: (bridge-387000)   <os>
	I0229 18:48:55.907364   56616 main.go:141] libmachine: (bridge-387000)     <type>hvm</type>
	I0229 18:48:55.907377   56616 main.go:141] libmachine: (bridge-387000)     <boot dev='cdrom'/>
	I0229 18:48:55.907386   56616 main.go:141] libmachine: (bridge-387000)     <boot dev='hd'/>
	I0229 18:48:55.907399   56616 main.go:141] libmachine: (bridge-387000)     <bootmenu enable='no'/>
	I0229 18:48:55.907412   56616 main.go:141] libmachine: (bridge-387000)   </os>
	I0229 18:48:55.907437   56616 main.go:141] libmachine: (bridge-387000)   <devices>
	I0229 18:48:55.907459   56616 main.go:141] libmachine: (bridge-387000)     <disk type='file' device='cdrom'>
	I0229 18:48:55.907481   56616 main.go:141] libmachine: (bridge-387000)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/boot2docker.iso'/>
	I0229 18:48:55.907495   56616 main.go:141] libmachine: (bridge-387000)       <target dev='hdc' bus='scsi'/>
	I0229 18:48:55.907508   56616 main.go:141] libmachine: (bridge-387000)       <readonly/>
	I0229 18:48:55.907518   56616 main.go:141] libmachine: (bridge-387000)     </disk>
	I0229 18:48:55.907531   56616 main.go:141] libmachine: (bridge-387000)     <disk type='file' device='disk'>
	I0229 18:48:55.907557   56616 main.go:141] libmachine: (bridge-387000)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:48:55.907574   56616 main.go:141] libmachine: (bridge-387000)       <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/bridge-387000.rawdisk'/>
	I0229 18:48:55.907586   56616 main.go:141] libmachine: (bridge-387000)       <target dev='hda' bus='virtio'/>
	I0229 18:48:55.907594   56616 main.go:141] libmachine: (bridge-387000)     </disk>
	I0229 18:48:55.907602   56616 main.go:141] libmachine: (bridge-387000)     <interface type='network'>
	I0229 18:48:55.907612   56616 main.go:141] libmachine: (bridge-387000)       <source network='mk-bridge-387000'/>
	I0229 18:48:55.907623   56616 main.go:141] libmachine: (bridge-387000)       <model type='virtio'/>
	I0229 18:48:55.907631   56616 main.go:141] libmachine: (bridge-387000)     </interface>
	I0229 18:48:55.907642   56616 main.go:141] libmachine: (bridge-387000)     <interface type='network'>
	I0229 18:48:55.907655   56616 main.go:141] libmachine: (bridge-387000)       <source network='default'/>
	I0229 18:48:55.907670   56616 main.go:141] libmachine: (bridge-387000)       <model type='virtio'/>
	I0229 18:48:55.907680   56616 main.go:141] libmachine: (bridge-387000)     </interface>
	I0229 18:48:55.907690   56616 main.go:141] libmachine: (bridge-387000)     <serial type='pty'>
	I0229 18:48:55.907699   56616 main.go:141] libmachine: (bridge-387000)       <target port='0'/>
	I0229 18:48:55.907709   56616 main.go:141] libmachine: (bridge-387000)     </serial>
	I0229 18:48:55.907717   56616 main.go:141] libmachine: (bridge-387000)     <console type='pty'>
	I0229 18:48:55.907728   56616 main.go:141] libmachine: (bridge-387000)       <target type='serial' port='0'/>
	I0229 18:48:55.907746   56616 main.go:141] libmachine: (bridge-387000)     </console>
	I0229 18:48:55.907765   56616 main.go:141] libmachine: (bridge-387000)     <rng model='virtio'>
	I0229 18:48:55.907778   56616 main.go:141] libmachine: (bridge-387000)       <backend model='random'>/dev/random</backend>
	I0229 18:48:55.907790   56616 main.go:141] libmachine: (bridge-387000)     </rng>
	I0229 18:48:55.907820   56616 main.go:141] libmachine: (bridge-387000)     
	I0229 18:48:55.907837   56616 main.go:141] libmachine: (bridge-387000)     
	I0229 18:48:55.907848   56616 main.go:141] libmachine: (bridge-387000)   </devices>
	I0229 18:48:55.907861   56616 main.go:141] libmachine: (bridge-387000) </domain>
	I0229 18:48:55.907874   56616 main.go:141] libmachine: (bridge-387000) 
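
Note: the XML above is the libvirt domain the kvm2 driver defines for bridge-387000: 3072 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a cdrom, the rawdisk as a virtio disk, and two virtio NICs (one on mk-bridge-387000, one on the default network). A minimal sketch of defining and starting such a domain with the libvirt Go bindings follows; it assumes the libvirt.org/go/libvirt package and a hypothetical XML file on disk, and is not the driver's actual code.

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Domain XML like the one dumped above; the path is hypothetical.
	xml, err := os.ReadFile("bridge-387000.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Persistently define the domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}
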
	I0229 18:48:55.986324   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:0b:a3:b8 in network default
	I0229 18:48:55.987007   56616 main.go:141] libmachine: (bridge-387000) Ensuring networks are active...
	I0229 18:48:55.987043   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:55.987631   56616 main.go:141] libmachine: (bridge-387000) Ensuring network default is active
	I0229 18:48:55.988035   56616 main.go:141] libmachine: (bridge-387000) Ensuring network mk-bridge-387000 is active
	I0229 18:48:55.988602   56616 main.go:141] libmachine: (bridge-387000) Getting domain xml...
	I0229 18:48:55.989337   56616 main.go:141] libmachine: (bridge-387000) Creating domain...
	I0229 18:48:57.278411   56616 main.go:141] libmachine: (bridge-387000) Waiting to get IP...
	I0229 18:48:57.279142   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:57.279606   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:57.279635   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:57.279570   56679 retry.go:31] will retry after 272.020032ms: waiting for machine to come up
	I0229 18:48:57.552974   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:57.553494   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:57.553524   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:57.553449   56679 retry.go:31] will retry after 361.14125ms: waiting for machine to come up
	I0229 18:48:57.916017   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:57.916519   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:57.916547   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:57.916480   56679 retry.go:31] will retry after 433.645136ms: waiting for machine to come up
	I0229 18:48:58.352062   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:58.352615   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:58.352648   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:58.352560   56679 retry.go:31] will retry after 586.599788ms: waiting for machine to come up
	I0229 18:48:58.940663   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:58.941363   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:58.941401   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:58.941267   56679 retry.go:31] will retry after 694.893907ms: waiting for machine to come up
	I0229 18:48:59.638320   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:48:59.639177   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:48:59.639638   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:48:59.639156   56679 retry.go:31] will retry after 616.373171ms: waiting for machine to come up
	I0229 18:49:00.256713   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:00.257280   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:00.257337   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:00.257213   56679 retry.go:31] will retry after 946.181658ms: waiting for machine to come up
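
Note: while the new VM boots, the driver polls libvirt's DHCP leases for the domain's MAC address and retries with growing, jittered delays (272ms, 361ms, 433ms, ... in the entries above) until an IP appears. A self-contained sketch of that retry-with-backoff pattern is below; the helper name is hypothetical (the retry.go:31 seen in the log is minikube's own retry helper, not this code).

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the attempts run
// out, sleeping a little longer (plus jitter) between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait between polls
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet") // stand-in for "unable to find current IP address"
		}
		return nil
	})
	fmt.Printf("err=%v after %d tries\n", err, tries)
}
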
	I0229 18:48:57.640616   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:00.142378   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:48:58.232255   54985 main.go:141] libmachine: (flannel-387000) Calling .GetIP
	I0229 18:48:58.235077   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:58.235503   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:48:58.235534   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:48:58.235706   54985 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 18:48:58.240459   54985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:48:58.254807   54985 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:48:58.254874   54985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:48:58.292420   54985 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:48:58.292504   54985 ssh_runner.go:195] Run: which lz4
	I0229 18:48:58.297357   54985 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 18:48:58.302502   54985 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:48:58.302535   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0229 18:49:00.245621   54985 containerd.go:548] Took 1.948289 seconds to copy over tarball
	I0229 18:49:00.245695   54985 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:49:03.216854   54985 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.97113447s)
	I0229 18:49:03.311255   54985 containerd.go:555] Took 3.065601 seconds to extract the tarball
	I0229 18:49:03.311279   54985 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:49:03.355068   54985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:49:03.482825   54985 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:49:03.512774   54985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:03.555996   54985 retry.go:31] will retry after 374.597303ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:49:03Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:49:03.931703   54985 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:03.974702   54985 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 18:49:03.974727   54985 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:49:03.974783   54985 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:49:04.015211   54985 cni.go:84] Creating CNI manager for "flannel"
	I0229 18:49:04.015239   54985 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:49:04.015256   54985 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.138 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-387000 NodeName:flannel-387000 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:49:04.015364   54985 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "flannel-387000"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:49:04.015429   54985 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=flannel-387000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:flannel-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:}
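
Note: the kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and the kubelet unit drop-in above are rendered from templates with node-specific values and then copied to the guest as /var/tmp/minikube/kubeadm.yaml.new and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal text/template sketch of producing just the InitConfiguration fragment is below; the struct and template here are illustrative, not minikube's actual bootstrapper templates.

package main

import (
	"os"
	"text/template"
)

// nodeValues holds the node-specific values substituted into the kubeadm
// config; the field names are illustrative, not minikube's template parameters.
type nodeValues struct {
	NodeName      string
	NodeIP        string
	APIServerPort int
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	data := nodeValues{NodeName: "flannel-387000", NodeIP: "192.168.50.138", APIServerPort: 8443}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
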
	I0229 18:49:04.015479   54985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:49:04.027273   54985 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:49:04.027349   54985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:49:04.038618   54985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 18:49:04.057180   54985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:49:04.075354   54985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0229 18:49:04.093446   54985 ssh_runner.go:195] Run: grep 192.168.50.138	control-plane.minikube.internal$ /etc/hosts
	I0229 18:49:04.097862   54985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:49:04.112661   54985 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000 for IP: 192.168.50.138
	I0229 18:49:04.112689   54985 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.112846   54985 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:49:04.112898   54985 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:49:04.112955   54985 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.key
	I0229 18:49:04.112968   54985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt with IP's: []
	I0229 18:49:04.246708   54985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt ...
	I0229 18:49:04.246740   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.crt: {Name:mkd2ec537db5870bae60b08d4f72854668507412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.246931   54985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.key ...
	I0229 18:49:04.246945   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/client.key: {Name:mk3766f09d804b8c79adb8c2906ce65c768652b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.247039   54985 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076
	I0229 18:49:04.247056   54985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076 with IP's: [192.168.50.138 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:49:04.301273   54985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076 ...
	I0229 18:49:04.301300   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076: {Name:mk7169c456aa9a4ecf986b00db44d47f2dc907ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.301465   54985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076 ...
	I0229 18:49:04.301481   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076: {Name:mk6393b99ec929fb754394229a6c7159a47bb763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.301569   54985 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt.150c6076 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt
	I0229 18:49:04.301660   54985 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key.150c6076 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key
	I0229 18:49:04.301754   54985 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key
	I0229 18:49:04.301769   54985 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt with IP's: []
	I0229 18:49:04.572734   54985 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt ...
	I0229 18:49:04.572764   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt: {Name:mk63899018d8766e5f7ceac2248de6529e432cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.572949   54985 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key ...
	I0229 18:49:04.572964   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key: {Name:mk348b42238503d8f73773c2471539498f37e200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:04.573159   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:49:04.573208   54985 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:49:04.573220   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:49:04.573257   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:49:04.573302   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:49:04.573337   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:49:04.573396   54985 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:49:04.574006   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:49:04.606754   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:49:04.632828   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:49:04.659118   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/flannel-387000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:49:04.686022   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:49:04.712702   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:49:04.739880   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:49:04.767247   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:49:04.797719   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:49:04.824352   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:49:04.850637   54985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:49:04.877085   54985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:49:04.895419   54985 ssh_runner.go:195] Run: openssl version
	I0229 18:49:04.901474   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:49:04.913206   54985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:04.918167   54985 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:04.918233   54985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:04.924528   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:49:04.935997   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:49:04.951762   54985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:49:04.958212   54985 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:49:04.958289   54985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:49:04.965363   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:49:04.976766   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:49:04.988980   54985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:49:04.994098   54985 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:49:04.994144   54985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:49:05.000698   54985 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
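
Note: the commands above install each CA into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL's CApath lookup finds a trusted certificate. A hedged Go sketch of that hash-and-symlink step, shelling out to openssl just as the logged commands do (helper name and paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under "<subject-hash>.0",
// mirroring the logged `openssl x509 -hash -noout` + `ln -fs` pair.
// Hypothetical helper; requires the openssl binary and write access to certsDir.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
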
	I0229 18:49:05.012981   54985 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:49:05.017989   54985 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:49:05.018048   54985 kubeadm.go:404] StartCluster: {Name:flannel-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:49:05.018149   54985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:49:05.018218   54985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:49:05.065799   54985 cri.go:89] found id: ""
	I0229 18:49:05.065931   54985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:49:05.076645   54985 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:49:05.087554   54985 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:49:05.098425   54985 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:49:05.098473   54985 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 18:49:05.165291   54985 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 18:49:05.165417   54985 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:49:05.325025   54985 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:49:05.325164   54985 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:49:05.325291   54985 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:49:05.561296   54985 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:49:01.204867   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:01.205369   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:01.205398   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:01.205320   56679 retry.go:31] will retry after 1.269210028s: waiting for machine to come up
	I0229 18:49:02.475729   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:02.476324   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:02.476372   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:02.476283   56679 retry.go:31] will retry after 1.35365046s: waiting for machine to come up
	I0229 18:49:03.831686   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:03.832193   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:03.832234   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:03.832163   56679 retry.go:31] will retry after 1.727519863s: waiting for machine to come up
	I0229 18:49:05.561673   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:05.562340   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:05.562365   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:05.562260   56679 retry.go:31] will retry after 1.769800655s: waiting for machine to come up
	I0229 18:49:02.668516   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:05.139882   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:05.564077   54985 out.go:204]   - Generating certificates and keys ...
	I0229 18:49:05.564186   54985 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:49:05.564302   54985 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:49:06.166833   54985 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:49:06.270209   54985 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:49:06.471361   54985 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:49:06.592112   54985 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:49:06.683086   54985 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:49:06.683244   54985 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [flannel-387000 localhost] and IPs [192.168.50.138 127.0.0.1 ::1]
	I0229 18:49:07.030753   54985 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:49:07.034577   54985 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [flannel-387000 localhost] and IPs [192.168.50.138 127.0.0.1 ::1]
	I0229 18:49:07.124341   54985 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:49:07.273168   54985 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:49:07.374288   54985 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:49:07.374643   54985 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:49:07.546728   54985 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:49:07.794758   54985 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:49:07.980088   54985 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:49:08.090238   54985 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:49:08.091197   54985 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:49:08.096118   54985 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:49:08.097933   54985 out.go:204]   - Booting up control plane ...
	I0229 18:49:08.098088   54985 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:49:08.098178   54985 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:49:08.098258   54985 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:49:08.120974   54985 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:49:08.122094   54985 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:49:08.122162   54985 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:49:07.637496   52876 pod_ready.go:102] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:08.139758   52876 pod_ready.go:92] pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.139786   52876 pod_ready.go:81] duration metric: took 40.009402771s waiting for pod "coredns-5dd5756b68-h7tnh" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.139801   52876 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.146783   52876 pod_ready.go:92] pod "etcd-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.146809   52876 pod_ready.go:81] duration metric: took 7.000584ms waiting for pod "etcd-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.146821   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.154109   52876 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.154169   52876 pod_ready.go:81] duration metric: took 7.338039ms waiting for pod "kube-apiserver-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.154189   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.160099   52876 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.160117   52876 pod_ready.go:81] duration metric: took 5.91974ms waiting for pod "kube-controller-manager-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.160130   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-g9phw" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.166492   52876 pod_ready.go:92] pod "kube-proxy-g9phw" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.166510   52876 pod_ready.go:81] duration metric: took 6.371891ms waiting for pod "kube-proxy-g9phw" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.166521   52876 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.535773   52876 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:08.535801   52876 pod_ready.go:81] duration metric: took 369.272066ms waiting for pod "kube-scheduler-enable-default-cni-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:08.535814   52876 pod_ready.go:38] duration metric: took 40.417024581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:49:08.535834   52876 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:49:08.535895   52876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:49:08.557861   52876 api_server.go:72] duration metric: took 41.867654795s to wait for apiserver process to appear ...
	I0229 18:49:08.557884   52876 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:49:08.557903   52876 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8443/healthz ...
	I0229 18:49:08.564318   52876 api_server.go:279] https://192.168.61.38:8443/healthz returned 200:
	ok
	I0229 18:49:08.565970   52876 api_server.go:141] control plane version: v1.28.4
	I0229 18:49:08.565995   52876 api_server.go:131] duration metric: took 8.1035ms to wait for apiserver health ...
	I0229 18:49:08.566005   52876 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:49:08.737834   52876 system_pods.go:59] 7 kube-system pods found
	I0229 18:49:08.737862   52876 system_pods.go:61] "coredns-5dd5756b68-h7tnh" [8378f6a0-03e8-45a0-822a-80b30208ddaa] Running
	I0229 18:49:08.737867   52876 system_pods.go:61] "etcd-enable-default-cni-387000" [37ef7d80-af93-422f-b188-1a817ae2d1e9] Running
	I0229 18:49:08.737874   52876 system_pods.go:61] "kube-apiserver-enable-default-cni-387000" [b5ee1f4d-681b-4788-ae39-d53a726f677c] Running
	I0229 18:49:08.737877   52876 system_pods.go:61] "kube-controller-manager-enable-default-cni-387000" [46c4c5d2-8656-42b9-8d43-a9006932902d] Running
	I0229 18:49:08.737880   52876 system_pods.go:61] "kube-proxy-g9phw" [f82d8097-989f-4e89-ad51-8ba63677e2f6] Running
	I0229 18:49:08.737884   52876 system_pods.go:61] "kube-scheduler-enable-default-cni-387000" [3ef0b6d3-9a60-4a3b-bccc-416fd65b2457] Running
	I0229 18:49:08.737886   52876 system_pods.go:61] "storage-provisioner" [0f5d9674-54f5-4e0b-9ba7-2dc1ee8477f9] Running
	I0229 18:49:08.737892   52876 system_pods.go:74] duration metric: took 171.881248ms to wait for pod list to return data ...
	I0229 18:49:08.737899   52876 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:49:08.935934   52876 default_sa.go:45] found service account: "default"
	I0229 18:49:08.935960   52876 default_sa.go:55] duration metric: took 198.054097ms for default service account to be created ...
	I0229 18:49:08.935969   52876 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:49:09.137364   52876 system_pods.go:86] 7 kube-system pods found
	I0229 18:49:09.137391   52876 system_pods.go:89] "coredns-5dd5756b68-h7tnh" [8378f6a0-03e8-45a0-822a-80b30208ddaa] Running
	I0229 18:49:09.137398   52876 system_pods.go:89] "etcd-enable-default-cni-387000" [37ef7d80-af93-422f-b188-1a817ae2d1e9] Running
	I0229 18:49:09.137403   52876 system_pods.go:89] "kube-apiserver-enable-default-cni-387000" [b5ee1f4d-681b-4788-ae39-d53a726f677c] Running
	I0229 18:49:09.137407   52876 system_pods.go:89] "kube-controller-manager-enable-default-cni-387000" [46c4c5d2-8656-42b9-8d43-a9006932902d] Running
	I0229 18:49:09.137410   52876 system_pods.go:89] "kube-proxy-g9phw" [f82d8097-989f-4e89-ad51-8ba63677e2f6] Running
	I0229 18:49:09.137415   52876 system_pods.go:89] "kube-scheduler-enable-default-cni-387000" [3ef0b6d3-9a60-4a3b-bccc-416fd65b2457] Running
	I0229 18:49:09.137418   52876 system_pods.go:89] "storage-provisioner" [0f5d9674-54f5-4e0b-9ba7-2dc1ee8477f9] Running
	I0229 18:49:09.137426   52876 system_pods.go:126] duration metric: took 201.45074ms to wait for k8s-apps to be running ...
	I0229 18:49:09.137434   52876 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:49:09.137485   52876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:49:09.153880   52876 system_svc.go:56] duration metric: took 16.434869ms WaitForService to wait for kubelet.
	I0229 18:49:09.153915   52876 kubeadm.go:581] duration metric: took 42.463713197s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:49:09.153938   52876 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:49:09.340471   52876 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:49:09.340502   52876 node_conditions.go:123] node cpu capacity is 2
	I0229 18:49:09.340515   52876 node_conditions.go:105] duration metric: took 186.571706ms to run NodePressure ...
	I0229 18:49:09.340528   52876 start.go:228] waiting for startup goroutines ...
	I0229 18:49:09.340536   52876 start.go:233] waiting for cluster config update ...
	I0229 18:49:09.340549   52876 start.go:242] writing updated cluster config ...
	I0229 18:49:09.340800   52876 ssh_runner.go:195] Run: rm -f paused
	I0229 18:49:09.400557   52876 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:49:09.403260   52876 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-387000" cluster and "default" namespace by default
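	With the enable-default-cni-387000 profile reported Done, an illustrative sanity check from the host would be the following (per the message above, kubectl's current context now points at that profile):
	kubectl get nodes -o wide
	kubectl get pods -A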
	I0229 18:49:07.333748   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:07.334330   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:07.334355   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:07.334288   56679 retry.go:31] will retry after 3.500057333s: waiting for machine to come up
	I0229 18:49:08.290891   54985 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:49:10.835648   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:10.836226   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:10.836253   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:10.836169   56679 retry.go:31] will retry after 3.989790949s: waiting for machine to come up
	I0229 18:49:14.828360   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:14.828762   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find current IP address of domain bridge-387000 in network mk-bridge-387000
	I0229 18:49:14.828794   56616 main.go:141] libmachine: (bridge-387000) DBG | I0229 18:49:14.828711   56679 retry.go:31] will retry after 4.551150284s: waiting for machine to come up
	I0229 18:49:14.792864   54985 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502792 seconds
	I0229 18:49:14.793025   54985 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 18:49:14.812677   54985 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 18:49:15.345116   54985 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 18:49:15.345370   54985 kubeadm.go:322] [mark-control-plane] Marking the node flannel-387000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 18:49:15.860380   54985 kubeadm.go:322] [bootstrap-token] Using token: fzw3xu.gjzf53iobyclbb8f
	I0229 18:49:15.862090   54985 out.go:204]   - Configuring RBAC rules ...
	I0229 18:49:15.862196   54985 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 18:49:15.883610   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 18:49:15.914462   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 18:49:15.930784   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 18:49:15.936444   54985 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 18:49:15.940632   54985 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 18:49:15.961392   54985 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 18:49:16.210930   54985 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 18:49:16.293240   54985 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 18:49:16.295565   54985 kubeadm.go:322] 
	I0229 18:49:16.295660   54985 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 18:49:16.295696   54985 kubeadm.go:322] 
	I0229 18:49:16.295824   54985 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 18:49:16.295843   54985 kubeadm.go:322] 
	I0229 18:49:16.295878   54985 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 18:49:16.295961   54985 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 18:49:16.296041   54985 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 18:49:16.296050   54985 kubeadm.go:322] 
	I0229 18:49:16.296127   54985 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 18:49:16.296137   54985 kubeadm.go:322] 
	I0229 18:49:16.296225   54985 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 18:49:16.296233   54985 kubeadm.go:322] 
	I0229 18:49:16.296303   54985 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 18:49:16.296408   54985 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 18:49:16.296505   54985 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 18:49:16.296519   54985 kubeadm.go:322] 
	I0229 18:49:16.297058   54985 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 18:49:16.297148   54985 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 18:49:16.297162   54985 kubeadm.go:322] 
	I0229 18:49:16.298485   54985 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fzw3xu.gjzf53iobyclbb8f \
	I0229 18:49:16.298641   54985 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e \
	I0229 18:49:16.298683   54985 kubeadm.go:322] 	--control-plane 
	I0229 18:49:16.298693   54985 kubeadm.go:322] 
	I0229 18:49:16.298817   54985 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 18:49:16.298828   54985 kubeadm.go:322] 
	I0229 18:49:16.298955   54985 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fzw3xu.gjzf53iobyclbb8f \
	I0229 18:49:16.299103   54985 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e 
	I0229 18:49:16.300315   54985 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
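	The preflight warning above is benign for this test run; if one wanted to clear it, the fix it names is the standard systemd call (illustrative, run on the guest):
	sudo systemctl enable kubelet.service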
	I0229 18:49:16.300349   54985 cni.go:84] Creating CNI manager for "flannel"
	I0229 18:49:16.301954   54985 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0229 18:49:16.303080   54985 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 18:49:16.315159   54985 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 18:49:16.315174   54985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4407 bytes)
	I0229 18:49:16.344591   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 18:49:17.371587   54985 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.026955401s)
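	For reference, the Flannel CNI apply that just completed can be replayed by hand with the exact command from the log, assuming the same binary and manifest paths still exist on the node:
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml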
	I0229 18:49:17.371662   54985 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:49:17.371774   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=flannel-387000 minikube.k8s.io/updated_at=2024_02_29T18_49_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:17.371780   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:17.403191   54985 ops.go:34] apiserver oom_adj: -16
	I0229 18:49:17.571494   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:18.072399   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:19.383783   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.384373   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has current primary IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.384400   56616 main.go:141] libmachine: (bridge-387000) Found IP for machine: 192.168.72.206
	I0229 18:49:19.384413   56616 main.go:141] libmachine: (bridge-387000) Reserving static IP address...
	I0229 18:49:19.384745   56616 main.go:141] libmachine: (bridge-387000) DBG | unable to find host DHCP lease matching {name: "bridge-387000", mac: "52:54:00:e7:3d:17", ip: "192.168.72.206"} in network mk-bridge-387000
	I0229 18:49:19.467084   56616 main.go:141] libmachine: (bridge-387000) Reserved static IP address: 192.168.72.206
	I0229 18:49:19.467116   56616 main.go:141] libmachine: (bridge-387000) DBG | Getting to WaitForSSH function...
	I0229 18:49:19.467131   56616 main.go:141] libmachine: (bridge-387000) Waiting for SSH to be available...
	I0229 18:49:19.470103   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.470604   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.470634   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.470883   56616 main.go:141] libmachine: (bridge-387000) DBG | Using SSH client type: external
	I0229 18:49:19.470915   56616 main.go:141] libmachine: (bridge-387000) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa (-rw-------)
	I0229 18:49:19.470955   56616 main.go:141] libmachine: (bridge-387000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:49:19.470968   56616 main.go:141] libmachine: (bridge-387000) DBG | About to run SSH command:
	I0229 18:49:19.470980   56616 main.go:141] libmachine: (bridge-387000) DBG | exit 0
	I0229 18:49:19.607432   56616 main.go:141] libmachine: (bridge-387000) DBG | SSH cmd err, output: <nil>: 
	I0229 18:49:19.607699   56616 main.go:141] libmachine: (bridge-387000) KVM machine creation complete!
	I0229 18:49:19.608049   56616 main.go:141] libmachine: (bridge-387000) Calling .GetConfigRaw
	I0229 18:49:19.608585   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:19.608830   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:19.608991   56616 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:49:19.609007   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:19.610370   56616 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:49:19.610388   56616 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:49:19.610394   56616 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:49:19.610400   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.612950   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.613296   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.613330   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.613454   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.613634   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.613806   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.613963   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.614133   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.614382   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.614398   56616 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:49:19.734312   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:49:19.734336   56616 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:49:19.734347   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.737388   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.737754   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.737783   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.737904   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.738096   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.738281   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.738431   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.738649   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.738844   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.738856   56616 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:49:19.848276   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:49:19.848367   56616 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:49:19.848392   56616 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:49:19.848407   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:49:19.848649   56616 buildroot.go:166] provisioning hostname "bridge-387000"
	I0229 18:49:19.848673   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:49:19.848904   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.851556   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.851862   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.851890   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.852064   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.852270   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.852422   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.852549   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.852682   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.852868   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.852886   56616 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-387000 && echo "bridge-387000" | sudo tee /etc/hostname
	I0229 18:49:19.979574   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-387000
	
	I0229 18:49:19.979606   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:19.982451   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.982866   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:19.982892   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:19.983066   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:19.983280   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.983460   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:19.983660   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:19.983817   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:19.984024   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:19.984047   56616 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-387000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-387000/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-387000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:49:20.103897   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: 
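	The empty SSH output above means the hostname snippet succeeded; a quick hand check on the guest would be (illustrative):
	hostname                            # expect: bridge-387000
	grep -n 'bridge-387000' /etc/hosts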
	I0229 18:49:20.103928   56616 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
	I0229 18:49:20.103958   56616 buildroot.go:174] setting up certificates
	I0229 18:49:20.103970   56616 provision.go:83] configureAuth start
	I0229 18:49:20.103979   56616 main.go:141] libmachine: (bridge-387000) Calling .GetMachineName
	I0229 18:49:20.104245   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:20.107051   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.107486   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.107524   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.107722   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.109836   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.110236   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.110275   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.110385   56616 provision.go:138] copyHostCerts
	I0229 18:49:20.110458   56616 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
	I0229 18:49:20.110479   56616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
	I0229 18:49:20.110574   56616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
	I0229 18:49:20.110744   56616 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
	I0229 18:49:20.110757   56616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
	I0229 18:49:20.110791   56616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
	I0229 18:49:20.110880   56616 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
	I0229 18:49:20.110891   56616 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
	I0229 18:49:20.110917   56616 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
	I0229 18:49:20.111011   56616 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.bridge-387000 san=[192.168.72.206 192.168.72.206 localhost 127.0.0.1 minikube bridge-387000]
	I0229 18:49:20.410804   56616 provision.go:172] copyRemoteCerts
	I0229 18:49:20.410861   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:49:20.410881   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.413655   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.414043   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.414071   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.414332   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.414499   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.414691   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.414834   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.497270   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:49:20.525867   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 18:49:20.552476   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:49:20.580240   56616 provision.go:86] duration metric: configureAuth took 476.257842ms
	I0229 18:49:20.580265   56616 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:49:20.580428   56616 config.go:182] Loaded profile config "bridge-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:49:20.580448   56616 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:49:20.580457   56616 main.go:141] libmachine: (bridge-387000) Calling .GetURL
	I0229 18:49:20.581631   56616 main.go:141] libmachine: (bridge-387000) DBG | Using libvirt version 6000000
	I0229 18:49:20.584136   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.584479   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.584506   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.584667   56616 main.go:141] libmachine: Docker is up and running!
	I0229 18:49:20.584681   56616 main.go:141] libmachine: Reticulating splines...
	I0229 18:49:20.584686   56616 client.go:171] LocalClient.Create took 25.283791742s
	I0229 18:49:20.584706   56616 start.go:167] duration metric: libmachine.API.Create for "bridge-387000" took 25.283858614s
	I0229 18:49:20.584723   56616 start.go:300] post-start starting for "bridge-387000" (driver="kvm2")
	I0229 18:49:20.584747   56616 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:49:20.584769   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.584984   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:49:20.585022   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.587316   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.587635   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.587662   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.587854   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.588015   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.588157   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.588290   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.670225   56616 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:49:20.675577   56616 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:49:20.675602   56616 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
	I0229 18:49:20.675691   56616 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
	I0229 18:49:20.675776   56616 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
	I0229 18:49:20.675893   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:49:20.686838   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:49:20.716655   56616 start.go:303] post-start completed in 131.906511ms
	I0229 18:49:20.716705   56616 main.go:141] libmachine: (bridge-387000) Calling .GetConfigRaw
	I0229 18:49:20.717307   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:20.720080   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.720472   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.720500   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.720732   56616 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/config.json ...
	I0229 18:49:20.720904   56616 start.go:128] duration metric: createHost completed in 25.440945694s
	I0229 18:49:20.720926   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.723089   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.723459   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.723488   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.723652   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.723817   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.723971   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.724130   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.724299   56616 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:20.724453   56616 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0229 18:49:20.724464   56616 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:49:20.835532   56616 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232560.812399754
	
	I0229 18:49:20.835558   56616 fix.go:206] guest clock: 1709232560.812399754
	I0229 18:49:20.835568   56616 fix.go:219] Guest: 2024-02-29 18:49:20.812399754 +0000 UTC Remote: 2024-02-29 18:49:20.720917042 +0000 UTC m=+29.991679352 (delta=91.482712ms)
	I0229 18:49:20.835595   56616 fix.go:190] guest clock delta is within tolerance: 91.482712ms
	I0229 18:49:20.835607   56616 start.go:83] releasing machines lock for "bridge-387000", held for 25.555852929s
	I0229 18:49:20.835636   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.835933   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:20.838370   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.838785   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.838813   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.838942   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.839411   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.839578   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:20.839675   56616 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:49:20.839726   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.839837   56616 ssh_runner.go:195] Run: cat /version.json
	I0229 18:49:20.839861   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:20.842320   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.842594   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.842695   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.842726   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.842906   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.843005   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:20.843025   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:20.843079   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.843212   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.843281   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:20.843352   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.843414   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:20.843535   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:20.843651   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:20.942136   56616 ssh_runner.go:195] Run: systemctl --version
	I0229 18:49:20.949065   56616 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:49:20.955541   56616 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:49:20.955595   56616 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:49:20.973448   56616 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
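	A readable, hand-runnable form of the disable step above (the exact conflist file names under /etc/cni/net.d vary per guest image):
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;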
	I0229 18:49:20.973470   56616 start.go:475] detecting cgroup driver to use...
	I0229 18:49:20.973533   56616 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:49:21.005357   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:49:21.022925   56616 docker.go:217] disabling cri-docker service (if available) ...
	I0229 18:49:21.022981   56616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 18:49:21.039112   56616 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 18:49:21.057247   56616 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 18:49:21.212886   56616 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 18:49:21.370010   56616 docker.go:233] disabling docker service ...
	I0229 18:49:21.370083   56616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 18:49:21.386034   56616 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 18:49:21.400407   56616 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 18:49:21.536035   56616 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 18:49:21.676667   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 18:49:21.692656   56616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:49:21.713885   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:49:21.725087   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:49:21.736570   56616 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:49:21.736626   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:49:21.748441   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:49:21.760072   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:49:21.771541   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:49:21.783060   56616 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:49:21.795082   56616 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:49:21.806530   56616 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:49:21.817316   56616 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 18:49:21.817363   56616 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 18:49:21.831798   56616 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:49:21.841878   56616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:49:21.966256   56616 ssh_runner.go:195] Run: sudo systemctl restart containerd
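	Condensed, the containerd adjustments made in the preceding lines amount to the following (commands copied from the log; run on the guest only, with the restart last):
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	sudo modprobe br_netfilter && sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart containerd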
	I0229 18:49:21.998492   56616 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0229 18:49:21.998590   56616 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:49:22.004519   56616 retry.go:31] will retry after 549.732234ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0229 18:49:22.555304   56616 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0229 18:49:22.561632   56616 start.go:543] Will wait 60s for crictl version
	I0229 18:49:22.561691   56616 ssh_runner.go:195] Run: which crictl
	I0229 18:49:22.566407   56616 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:49:22.608717   56616 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0229 18:49:22.608795   56616 ssh_runner.go:195] Run: containerd --version
	I0229 18:49:22.640336   56616 ssh_runner.go:195] Run: containerd --version
	I0229 18:49:22.680031   56616 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0229 18:49:18.571782   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:19.072566   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:19.572515   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:20.071582   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:20.571812   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:21.072193   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:21.572064   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:22.071516   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:22.571542   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:23.072295   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:22.681459   56616 main.go:141] libmachine: (bridge-387000) Calling .GetIP
	I0229 18:49:22.684177   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:22.684547   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:22.684578   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:22.684769   56616 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 18:49:22.690361   56616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:49:22.707604   56616 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 18:49:22.707655   56616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:22.745534   56616 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 18:49:22.745594   56616 ssh_runner.go:195] Run: which lz4
	I0229 18:49:22.750429   56616 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:49:22.755295   56616 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:49:22.755334   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0229 18:49:24.704924   56616 containerd.go:548] Took 1.954529 seconds to copy over tarball
	I0229 18:49:24.705016   56616 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:49:23.572426   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:24.071621   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:24.572409   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:25.071577   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:25.571832   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:26.071861   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:26.572338   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:27.072199   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:27.571695   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:28.072553   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:28.101160   56616 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.396112309s)
	I0229 18:49:28.101213   56616 containerd.go:555] Took 3.396233 seconds to extract the tarball
	I0229 18:49:28.101226   56616 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:49:28.154976   56616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:49:28.289077   56616 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:49:28.320670   56616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:28.355635   56616 retry.go:31] will retry after 138.96002ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T18:49:28Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0229 18:49:28.495035   56616 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 18:49:28.553988   56616 containerd.go:612] all images are preloaded for containerd runtime.
	I0229 18:49:28.554009   56616 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:49:28.554060   56616 ssh_runner.go:195] Run: sudo crictl info
	I0229 18:49:28.600664   56616 cni.go:84] Creating CNI manager for "bridge"
	I0229 18:49:28.600693   56616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:49:28.600714   56616 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.206 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-387000 NodeName:bridge-387000 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:49:28.600848   56616 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-387000"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:49:28.600942   56616 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=bridge-387000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I0229 18:49:28.601002   56616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:49:28.616140   56616 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:49:28.616209   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:49:28.633664   56616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0229 18:49:28.658539   56616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:49:28.684967   56616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0229 18:49:28.710713   56616 ssh_runner.go:195] Run: grep 192.168.72.206	control-plane.minikube.internal$ /etc/hosts
	I0229 18:49:28.716105   56616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:49:28.733367   56616 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000 for IP: 192.168.72.206
	I0229 18:49:28.733415   56616 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:28.733573   56616 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
	I0229 18:49:28.733623   56616 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
	I0229 18:49:28.733680   56616 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.key
	I0229 18:49:28.733694   56616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt with IP's: []
	I0229 18:49:28.791010   56616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt ...
	I0229 18:49:28.791038   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.crt: {Name:mk50543b4974f7b0d4a09fb2870e44081bb4582d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:28.835058   56616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.key ...
	I0229 18:49:28.835097   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/client.key: {Name:mke5f475ff44a7d60f463fae93efe5254b8a5c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:28.835232   56616 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3
	I0229 18:49:28.835254   56616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3 with IP's: [192.168.72.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:49:29.019897   56616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3 ...
	I0229 18:49:29.019921   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3: {Name:mk4dac7431c0dfd64561c8fd1f0f4cb186755cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:29.020052   56616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3 ...
	I0229 18:49:29.020064   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3: {Name:mk4f643b99bfb2c97bb2ca84f2a221c98ae6ea1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:29.020133   56616 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt.b63212e3 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt
	I0229 18:49:29.020216   56616 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key.b63212e3 -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key
	I0229 18:49:29.020287   56616 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key
	I0229 18:49:29.020300   56616 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt with IP's: []
	I0229 18:49:29.156471   56616 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt ...
	I0229 18:49:29.156495   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt: {Name:mk039e7b11fadfb2bda49a067152e4dd8bb9c470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:29.156662   56616 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key ...
	I0229 18:49:29.156676   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key: {Name:mkb90e2c6b23fc807ff57dc47401135b79347487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:29.156862   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
	W0229 18:49:29.156903   56616 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
	I0229 18:49:29.156911   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:49:29.156936   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:49:29.156960   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:49:29.156984   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
	I0229 18:49:29.157020   56616 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
	I0229 18:49:29.157577   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:49:29.190200   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:49:29.220613   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:49:29.266511   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/bridge-387000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:49:29.295203   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:49:29.327955   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:49:29.356419   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:49:29.387697   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:49:29.420770   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:49:29.449894   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
	I0229 18:49:29.479202   56616 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
	I0229 18:49:29.510064   56616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:49:29.530121   56616 ssh_runner.go:195] Run: openssl version
	I0229 18:49:29.536839   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:49:29.550163   56616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:29.555595   56616 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:29.555652   56616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:29.562567   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:49:29.576224   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
	I0229 18:49:29.591681   56616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
	I0229 18:49:29.598637   56616 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
	I0229 18:49:29.598695   56616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
	I0229 18:49:29.607598   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
	I0229 18:49:29.623118   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
	I0229 18:49:29.636932   56616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
	I0229 18:49:29.642481   56616 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
	I0229 18:49:29.642577   56616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
	I0229 18:49:29.649436   56616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:49:29.664325   56616 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:49:29.671243   56616 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:49:29.671303   56616 kubeadm.go:404] StartCluster: {Name:bridge-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.206 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:49:29.671391   56616 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0229 18:49:29.671456   56616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 18:49:29.719205   56616 cri.go:89] found id: ""
	I0229 18:49:29.719276   56616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:49:29.731745   56616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:49:29.742889   56616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:49:29.755399   56616 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:49:29.755457   56616 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 18:49:29.814723   56616 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 18:49:29.814800   56616 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:49:29.971746   56616 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:49:29.971886   56616 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:49:29.972034   56616 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:49:30.252167   56616 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:49:28.572433   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:29.147556   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:29.571509   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:30.071581   54985 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:30.255345   54985 kubeadm.go:1088] duration metric: took 12.883640109s to wait for elevateKubeSystemPrivileges.
	I0229 18:49:30.255372   54985 kubeadm.go:406] StartCluster complete in 25.237326714s
	I0229 18:49:30.255392   54985 settings.go:142] acquiring lock: {Name:mk54a855ef147e30c2cf7f1217afa4524cb1d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:30.255456   54985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:49:30.256879   54985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/kubeconfig: {Name:mk5f8fb7db84beb25fa22fdc3301133bb69ddfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:30.257134   54985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:49:30.257619   54985 config.go:182] Loaded profile config "flannel-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:49:30.257690   54985 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:49:30.257770   54985 addons.go:69] Setting storage-provisioner=true in profile "flannel-387000"
	I0229 18:49:30.257837   54985 addons.go:234] Setting addon storage-provisioner=true in "flannel-387000"
	I0229 18:49:30.257881   54985 host.go:66] Checking if "flannel-387000" exists ...
	I0229 18:49:30.258156   54985 addons.go:69] Setting default-storageclass=true in profile "flannel-387000"
	I0229 18:49:30.258171   54985 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-387000"
	I0229 18:49:30.258646   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.258691   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.259051   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.259077   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.278275   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0229 18:49:30.278851   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.278975   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0229 18:49:30.279294   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.279523   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.279542   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.279869   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.280368   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.280404   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.280950   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.280967   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.281355   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.281574   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:49:30.284908   54985 addons.go:234] Setting addon default-storageclass=true in "flannel-387000"
	I0229 18:49:30.284946   54985 host.go:66] Checking if "flannel-387000" exists ...
	I0229 18:49:30.285335   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.285363   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.301797   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0229 18:49:30.302404   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.303039   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.303066   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.306458   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41999
	I0229 18:49:30.306990   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.307006   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.307339   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:49:30.307517   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.307533   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.307975   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.308574   54985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:30.308623   54985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:30.309181   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:49:30.311670   54985 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:49:30.254939   56616 out.go:204]   - Generating certificates and keys ...
	I0229 18:49:30.255042   56616 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:49:30.255133   56616 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:49:30.766905   56616 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:49:30.313783   54985 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:30.313802   54985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:49:30.313819   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:49:30.316849   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.317085   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:49:30.317110   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.317298   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:49:30.317478   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:49:30.317580   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:49:30.317664   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:49:30.329670   54985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0229 18:49:30.330177   54985 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:30.330783   54985 main.go:141] libmachine: Using API Version  1
	I0229 18:49:30.330806   54985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:30.331176   54985 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:30.331384   54985 main.go:141] libmachine: (flannel-387000) Calling .GetState
	I0229 18:49:30.333197   54985 main.go:141] libmachine: (flannel-387000) Calling .DriverName
	I0229 18:49:30.334826   54985 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:30.334843   54985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:49:30.334869   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHHostname
	I0229 18:49:30.338144   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.338778   54985 main.go:141] libmachine: (flannel-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:87:55", ip: ""} in network mk-flannel-387000: {Iface:virbr1 ExpiryTime:2024-02-29 19:48:40 +0000 UTC Type:0 Mac:52:54:00:39:87:55 Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:flannel-387000 Clientid:01:52:54:00:39:87:55}
	I0229 18:49:30.338801   54985 main.go:141] libmachine: (flannel-387000) DBG | domain flannel-387000 has defined IP address 192.168.50.138 and MAC address 52:54:00:39:87:55 in network mk-flannel-387000
	I0229 18:49:30.338956   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHPort
	I0229 18:49:30.339157   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHKeyPath
	I0229 18:49:30.339431   54985 main.go:141] libmachine: (flannel-387000) Calling .GetSSHUsername
	I0229 18:49:30.339660   54985 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/flannel-387000/id_rsa Username:docker}
	I0229 18:49:30.502461   54985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:30.539109   54985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:30.554940   54985 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 18:49:30.782362   54985 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-387000" context rescaled to 1 replicas
	I0229 18:49:30.782402   54985 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.138 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:49:30.783987   54985 out.go:177] * Verifying Kubernetes components...
	I0229 18:49:30.785300   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:49:31.601895   54985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099398912s)
	I0229 18:49:31.601940   54985 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.062804544s)
	I0229 18:49:31.601978   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.601990   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.601990   54985 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.047018605s)
	I0229 18:49:31.602013   54985 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0229 18:49:31.601947   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.602042   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.603410   54985 node_ready.go:35] waiting up to 15m0s for node "flannel-387000" to be "Ready" ...
	I0229 18:49:31.604154   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.604179   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.604183   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.604208   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.604218   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.604224   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.604226   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.604232   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.604240   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.604247   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.607823   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.607844   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.607926   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.607950   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.607911   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.607996   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.618857   54985 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:31.618881   54985 main.go:141] libmachine: (flannel-387000) Calling .Close
	I0229 18:49:31.619130   54985 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:31.619149   54985 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:31.619162   54985 main.go:141] libmachine: (flannel-387000) DBG | Closing plugin on server side
	I0229 18:49:31.620981   54985 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:49:31.033759   56616 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:49:31.120891   56616 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:49:31.463853   56616 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:49:31.551551   56616 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:49:31.551893   56616 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [bridge-387000 localhost] and IPs [192.168.72.206 127.0.0.1 ::1]
	I0229 18:49:31.722990   56616 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:49:31.723158   56616 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [bridge-387000 localhost] and IPs [192.168.72.206 127.0.0.1 ::1]
	I0229 18:49:31.825373   56616 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:49:32.063471   56616 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:49:32.222614   56616 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:49:32.223114   56616 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:49:32.510014   56616 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:49:32.655275   56616 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:49:32.784615   56616 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:49:33.064676   56616 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:49:33.065222   56616 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:49:33.070795   56616 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:49:31.622660   54985 addons.go:505] enable addons completed in 1.364975416s: enabled=[storage-provisioner default-storageclass]
	I0229 18:49:33.072609   56616 out.go:204]   - Booting up control plane ...
	I0229 18:49:33.072726   56616 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:49:33.072814   56616 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:49:33.073366   56616 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:49:33.093436   56616 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:49:33.094135   56616 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:49:33.094181   56616 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:49:33.255460   56616 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:49:33.608730   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:36.109664   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:39.758816   56616 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504601 seconds
	I0229 18:49:39.758957   56616 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 18:49:39.776368   56616 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 18:49:40.309919   56616 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 18:49:40.310085   56616 kubeadm.go:322] [mark-control-plane] Marking the node bridge-387000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 18:49:40.825578   56616 kubeadm.go:322] [bootstrap-token] Using token: 48g59o.us88bsv20d2vcd89
	I0229 18:49:40.826978   56616 out.go:204]   - Configuring RBAC rules ...
	I0229 18:49:40.827126   56616 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 18:49:40.832889   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 18:49:40.847681   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 18:49:40.851826   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 18:49:40.860535   56616 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 18:49:40.863938   56616 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 18:49:40.879602   56616 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 18:49:41.139767   56616 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 18:49:41.247367   56616 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 18:49:41.248543   56616 kubeadm.go:322] 
	I0229 18:49:41.248664   56616 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 18:49:41.248690   56616 kubeadm.go:322] 
	I0229 18:49:41.248783   56616 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 18:49:41.248792   56616 kubeadm.go:322] 
	I0229 18:49:41.248824   56616 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 18:49:41.248897   56616 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 18:49:41.248960   56616 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 18:49:41.248968   56616 kubeadm.go:322] 
	I0229 18:49:41.249052   56616 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 18:49:41.249061   56616 kubeadm.go:322] 
	I0229 18:49:41.249127   56616 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 18:49:41.249136   56616 kubeadm.go:322] 
	I0229 18:49:41.249198   56616 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 18:49:41.249301   56616 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 18:49:41.249386   56616 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 18:49:41.249395   56616 kubeadm.go:322] 
	I0229 18:49:41.249495   56616 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 18:49:41.249590   56616 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 18:49:41.249599   56616 kubeadm.go:322] 
	I0229 18:49:41.249698   56616 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 48g59o.us88bsv20d2vcd89 \
	I0229 18:49:41.249827   56616 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e \
	I0229 18:49:41.249854   56616 kubeadm.go:322] 	--control-plane 
	I0229 18:49:41.249864   56616 kubeadm.go:322] 
	I0229 18:49:41.249960   56616 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 18:49:41.249973   56616 kubeadm.go:322] 
	I0229 18:49:41.250084   56616 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 48g59o.us88bsv20d2vcd89 \
	I0229 18:49:41.250208   56616 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1f7ebe59c801ba2f1986d866504c67423c29af63db37f66e58865c4cb8ee981e 
	I0229 18:49:41.250696   56616 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:49:41.250727   56616 cni.go:84] Creating CNI manager for "bridge"
	I0229 18:49:41.252404   56616 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:49:38.607932   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:41.107735   54985 node_ready.go:58] node "flannel-387000" has status "Ready":"False"
	I0229 18:49:42.609430   54985 node_ready.go:49] node "flannel-387000" has status "Ready":"True"
	I0229 18:49:42.609457   54985 node_ready.go:38] duration metric: took 11.006003925s waiting for node "flannel-387000" to be "Ready" ...
	I0229 18:49:42.609471   54985 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:49:42.624116   54985 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:41.253727   56616 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:49:41.269984   56616 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:49:41.328740   56616 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:49:41.328774   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:41.328809   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=bridge-387000 minikube.k8s.io/updated_at=2024_02_29T18_49_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:41.621377   56616 ops.go:34] apiserver oom_adj: -16
	I0229 18:49:41.621528   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:42.121610   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:42.621559   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:43.122111   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:43.622388   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:44.122162   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:44.622423   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:45.121523   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:45.622490   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:44.641882   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:47.130938   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:46.122475   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:46.622252   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:47.121950   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:47.622166   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:48.121847   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:48.621831   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:49.122205   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:49.621597   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:50.122536   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:50.622202   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:49.131339   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:51.135261   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:51.122456   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:51.621671   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:52.122623   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:52.622270   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:53.122350   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:53.622060   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:54.121825   56616 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:49:54.257142   56616 kubeadm.go:1088] duration metric: took 12.928415741s to wait for elevateKubeSystemPrivileges.
	I0229 18:49:54.257178   56616 kubeadm.go:406] StartCluster complete in 24.585883521s
	I0229 18:49:54.257204   56616 settings.go:142] acquiring lock: {Name:mk54a855ef147e30c2cf7f1217afa4524cb1d2a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:54.257277   56616 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:49:54.258372   56616 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/kubeconfig: {Name:mk5f8fb7db84beb25fa22fdc3301133bb69ddfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:54.258640   56616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:49:54.258784   56616 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:49:54.258852   56616 config.go:182] Loaded profile config "bridge-387000": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:49:54.258864   56616 addons.go:69] Setting storage-provisioner=true in profile "bridge-387000"
	I0229 18:49:54.258895   56616 addons.go:234] Setting addon storage-provisioner=true in "bridge-387000"
	I0229 18:49:54.258901   56616 addons.go:69] Setting default-storageclass=true in profile "bridge-387000"
	I0229 18:49:54.258931   56616 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-387000"
	I0229 18:49:54.258950   56616 host.go:66] Checking if "bridge-387000" exists ...
	I0229 18:49:54.259398   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.259398   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.259445   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.259466   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.274567   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0229 18:49:54.277110   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.277767   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.277790   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.278206   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.278446   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:54.278866   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0229 18:49:54.279297   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.279768   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.279792   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.280118   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.280678   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.280726   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.281973   56616 addons.go:234] Setting addon default-storageclass=true in "bridge-387000"
	I0229 18:49:54.282013   56616 host.go:66] Checking if "bridge-387000" exists ...
	I0229 18:49:54.282392   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.282445   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.295870   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0229 18:49:54.296283   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.296777   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.296801   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.297117   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.297321   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:54.299340   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:54.301174   56616 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:49:54.301561   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0229 18:49:54.302562   56616 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:54.302576   56616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:49:54.302594   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:54.303053   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.303865   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.303882   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.304245   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.304750   56616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:49:54.304780   56616 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:49:54.306137   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.306593   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:54.306618   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.306870   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:54.307074   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:54.307243   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:54.307481   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:54.319774   56616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0229 18:49:54.320146   56616 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:49:54.320676   56616 main.go:141] libmachine: Using API Version  1
	I0229 18:49:54.320699   56616 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:49:54.320988   56616 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:49:54.321177   56616 main.go:141] libmachine: (bridge-387000) Calling .GetState
	I0229 18:49:54.322823   56616 main.go:141] libmachine: (bridge-387000) Calling .DriverName
	I0229 18:49:54.323092   56616 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:54.323118   56616 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:49:54.323138   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHHostname
	I0229 18:49:54.325627   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.326037   56616 main.go:141] libmachine: (bridge-387000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:3d:17", ip: ""} in network mk-bridge-387000: {Iface:virbr4 ExpiryTime:2024-02-29 19:49:12 +0000 UTC Type:0 Mac:52:54:00:e7:3d:17 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:bridge-387000 Clientid:01:52:54:00:e7:3d:17}
	I0229 18:49:54.326059   56616 main.go:141] libmachine: (bridge-387000) DBG | domain bridge-387000 has defined IP address 192.168.72.206 and MAC address 52:54:00:e7:3d:17 in network mk-bridge-387000
	I0229 18:49:54.326323   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHPort
	I0229 18:49:54.326509   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHKeyPath
	I0229 18:49:54.326717   56616 main.go:141] libmachine: (bridge-387000) Calling .GetSSHUsername
	I0229 18:49:54.326866   56616 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/bridge-387000/id_rsa Username:docker}
	I0229 18:49:54.474211   56616 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 18:49:54.543120   56616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:49:54.565025   56616 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:49:54.763970   56616 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-387000" context rescaled to 1 replicas
	I0229 18:49:54.764002   56616 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.206 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0229 18:49:54.765491   56616 out.go:177] * Verifying Kubernetes components...
	I0229 18:49:54.766776   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:49:56.093245   56616 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.619001058s)
	I0229 18:49:56.093271   56616 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0229 18:49:56.244809   56616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.679752426s)
	I0229 18:49:56.244861   56616 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.478052227s)
	I0229 18:49:56.244872   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.244885   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245061   56616 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.701909954s)
	I0229 18:49:56.245092   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.245103   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245193   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245254   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245272   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.245286   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.245296   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245387   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245418   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245425   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.245433   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.245439   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.245527   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245550   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245557   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.245677   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.245753   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.245810   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.246815   56616 node_ready.go:35] waiting up to 15m0s for node "bridge-387000" to be "Ready" ...
	I0229 18:49:56.262485   56616 node_ready.go:49] node "bridge-387000" has status "Ready":"True"
	I0229 18:49:56.262508   56616 node_ready.go:38] duration metric: took 15.661393ms waiting for node "bridge-387000" to be "Ready" ...
	I0229 18:49:56.262519   56616 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:49:56.269209   56616 main.go:141] libmachine: Making call to close driver server
	I0229 18:49:56.269249   56616 main.go:141] libmachine: (bridge-387000) Calling .Close
	I0229 18:49:56.269550   56616 main.go:141] libmachine: (bridge-387000) DBG | Closing plugin on server side
	I0229 18:49:56.269585   56616 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:49:56.269596   56616 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:49:56.271325   56616 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:49:53.636200   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:56.135510   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:56.274698   56616 addons.go:505] enable addons completed in 2.015917545s: enabled=[storage-provisioner default-storageclass]
	I0229 18:49:56.273154   56616 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:56.777557   56616 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6h7vf" not found
	I0229 18:49:56.777590   56616 pod_ready.go:81] duration metric: took 502.858226ms waiting for pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace to be "Ready" ...
	E0229 18:49:56.777602   56616 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6h7vf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6h7vf" not found
	I0229 18:49:56.777610   56616 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:58.784721   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:58.633672   54985 pod_ready.go:102] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"False"
	I0229 18:49:59.646464   54985 pod_ready.go:92] pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.646494   54985 pod_ready.go:81] duration metric: took 17.022352449s waiting for pod "coredns-5dd5756b68-qxt8h" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.646508   54985 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.660404   54985 pod_ready.go:92] pod "etcd-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.660434   54985 pod_ready.go:81] duration metric: took 13.918303ms waiting for pod "etcd-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.660448   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.668034   54985 pod_ready.go:92] pod "kube-apiserver-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.668068   54985 pod_ready.go:81] duration metric: took 7.603659ms waiting for pod "kube-apiserver-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.668081   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.675598   54985 pod_ready.go:92] pod "kube-controller-manager-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.675622   54985 pod_ready.go:81] duration metric: took 7.532168ms waiting for pod "kube-controller-manager-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.675635   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-9lqms" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.684548   54985 pod_ready.go:92] pod "kube-proxy-9lqms" in "kube-system" namespace has status "Ready":"True"
	I0229 18:49:59.684565   54985 pod_ready.go:81] duration metric: took 8.922978ms waiting for pod "kube-proxy-9lqms" in "kube-system" namespace to be "Ready" ...
	I0229 18:49:59.684573   54985 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:00.028950   54985 pod_ready.go:92] pod "kube-scheduler-flannel-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:00.028971   54985 pod_ready.go:81] duration metric: took 344.392651ms waiting for pod "kube-scheduler-flannel-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:00.028982   54985 pod_ready.go:38] duration metric: took 17.419480623s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:50:00.029001   54985 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:50:00.029056   54985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:50:00.045556   54985 api_server.go:72] duration metric: took 29.263117975s to wait for apiserver process to appear ...
	I0229 18:50:00.045578   54985 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:50:00.045596   54985 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0229 18:50:00.055284   54985 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0229 18:50:00.056565   54985 api_server.go:141] control plane version: v1.28.4
	I0229 18:50:00.056586   54985 api_server.go:131] duration metric: took 11.003224ms to wait for apiserver health ...
	I0229 18:50:00.056594   54985 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:50:00.231132   54985 system_pods.go:59] 7 kube-system pods found
	I0229 18:50:00.231160   54985 system_pods.go:61] "coredns-5dd5756b68-qxt8h" [91c2382c-26ba-4455-8de8-609d87672c39] Running
	I0229 18:50:00.231165   54985 system_pods.go:61] "etcd-flannel-387000" [22a964de-c428-4fb8-8838-84573dcdce1a] Running
	I0229 18:50:00.231169   54985 system_pods.go:61] "kube-apiserver-flannel-387000" [03e54dd2-7b60-453a-990b-3645f0bf3963] Running
	I0229 18:50:00.231173   54985 system_pods.go:61] "kube-controller-manager-flannel-387000" [f912185e-07ba-4237-b0b6-82afb0a8eb0c] Running
	I0229 18:50:00.231176   54985 system_pods.go:61] "kube-proxy-9lqms" [cf865127-44ac-4dbb-b8d9-2e94bc3129bd] Running
	I0229 18:50:00.231179   54985 system_pods.go:61] "kube-scheduler-flannel-387000" [899d65db-9a8e-47ae-81bf-efffd7c9b62a] Running
	I0229 18:50:00.231185   54985 system_pods.go:61] "storage-provisioner" [b7d6d993-2e51-4ddd-a08c-3ee2ffd13c11] Running
	I0229 18:50:00.231191   54985 system_pods.go:74] duration metric: took 174.59191ms to wait for pod list to return data ...
	I0229 18:50:00.231202   54985 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:50:00.427825   54985 default_sa.go:45] found service account: "default"
	I0229 18:50:00.427848   54985 default_sa.go:55] duration metric: took 196.638007ms for default service account to be created ...
	I0229 18:50:00.427856   54985 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:50:00.630992   54985 system_pods.go:86] 7 kube-system pods found
	I0229 18:50:00.631017   54985 system_pods.go:89] "coredns-5dd5756b68-qxt8h" [91c2382c-26ba-4455-8de8-609d87672c39] Running
	I0229 18:50:00.631023   54985 system_pods.go:89] "etcd-flannel-387000" [22a964de-c428-4fb8-8838-84573dcdce1a] Running
	I0229 18:50:00.631033   54985 system_pods.go:89] "kube-apiserver-flannel-387000" [03e54dd2-7b60-453a-990b-3645f0bf3963] Running
	I0229 18:50:00.631037   54985 system_pods.go:89] "kube-controller-manager-flannel-387000" [f912185e-07ba-4237-b0b6-82afb0a8eb0c] Running
	I0229 18:50:00.631041   54985 system_pods.go:89] "kube-proxy-9lqms" [cf865127-44ac-4dbb-b8d9-2e94bc3129bd] Running
	I0229 18:50:00.631044   54985 system_pods.go:89] "kube-scheduler-flannel-387000" [899d65db-9a8e-47ae-81bf-efffd7c9b62a] Running
	I0229 18:50:00.631048   54985 system_pods.go:89] "storage-provisioner" [b7d6d993-2e51-4ddd-a08c-3ee2ffd13c11] Running
	I0229 18:50:00.631054   54985 system_pods.go:126] duration metric: took 203.193764ms to wait for k8s-apps to be running ...
	I0229 18:50:00.631060   54985 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:50:00.631100   54985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:50:00.647362   54985 system_svc.go:56] duration metric: took 16.295671ms WaitForService to wait for kubelet.
	I0229 18:50:00.647389   54985 kubeadm.go:581] duration metric: took 29.864953234s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:50:00.647411   54985 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:50:00.828295   54985 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:50:00.828327   54985 node_conditions.go:123] node cpu capacity is 2
	I0229 18:50:00.828337   54985 node_conditions.go:105] duration metric: took 180.921273ms to run NodePressure ...
	I0229 18:50:00.828349   54985 start.go:228] waiting for startup goroutines ...
	I0229 18:50:00.828354   54985 start.go:233] waiting for cluster config update ...
	I0229 18:50:00.828363   54985 start.go:242] writing updated cluster config ...
	I0229 18:50:00.828577   54985 ssh_runner.go:195] Run: rm -f paused
	I0229 18:50:00.875485   54985 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:50:00.877652   54985 out.go:177] * Done! kubectl is now configured to use "flannel-387000" cluster and "default" namespace by default
	I0229 18:50:00.784892   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:03.284667   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:05.785478   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:07.787316   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:10.284704   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:12.785430   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:15.284253   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:17.287335   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:19.784091   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:22.284722   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:24.286943   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:26.785520   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:28.785970   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:30.786304   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:33.285494   56616 pod_ready.go:102] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"False"
	I0229 18:50:35.284449   56616 pod_ready.go:92] pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.284470   56616 pod_ready.go:81] duration metric: took 38.506852963s waiting for pod "coredns-5dd5756b68-hpkhw" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.284479   56616 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.289780   56616 pod_ready.go:92] pod "etcd-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.289799   56616 pod_ready.go:81] duration metric: took 5.315104ms waiting for pod "etcd-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.289807   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.294824   56616 pod_ready.go:92] pod "kube-apiserver-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.294842   56616 pod_ready.go:81] duration metric: took 5.028182ms waiting for pod "kube-apiserver-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.294852   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.299640   56616 pod_ready.go:92] pod "kube-controller-manager-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.299654   56616 pod_ready.go:81] duration metric: took 4.795712ms waiting for pod "kube-controller-manager-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.299661   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-mkwsw" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.304614   56616 pod_ready.go:92] pod "kube-proxy-mkwsw" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.304626   56616 pod_ready.go:81] duration metric: took 4.960046ms waiting for pod "kube-proxy-mkwsw" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.304633   56616 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.681642   56616 pod_ready.go:92] pod "kube-scheduler-bridge-387000" in "kube-system" namespace has status "Ready":"True"
	I0229 18:50:35.681664   56616 pod_ready.go:81] duration metric: took 377.024979ms waiting for pod "kube-scheduler-bridge-387000" in "kube-system" namespace to be "Ready" ...
	I0229 18:50:35.681673   56616 pod_ready.go:38] duration metric: took 39.41914281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:50:35.681686   56616 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:50:35.681729   56616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:50:35.698571   56616 api_server.go:72] duration metric: took 40.934535224s to wait for apiserver process to appear ...
	I0229 18:50:35.698596   56616 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:50:35.698612   56616 api_server.go:253] Checking apiserver healthz at https://192.168.72.206:8443/healthz ...
	I0229 18:50:35.703217   56616 api_server.go:279] https://192.168.72.206:8443/healthz returned 200:
	ok
	I0229 18:50:35.704671   56616 api_server.go:141] control plane version: v1.28.4
	I0229 18:50:35.704693   56616 api_server.go:131] duration metric: took 6.09165ms to wait for apiserver health ...
	I0229 18:50:35.704700   56616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:50:35.884622   56616 system_pods.go:59] 7 kube-system pods found
	I0229 18:50:35.884650   56616 system_pods.go:61] "coredns-5dd5756b68-hpkhw" [451828e2-19b3-4425-b363-75fffabf5390] Running
	I0229 18:50:35.884654   56616 system_pods.go:61] "etcd-bridge-387000" [8f1f0795-62bc-4013-be47-19f384d6457e] Running
	I0229 18:50:35.884658   56616 system_pods.go:61] "kube-apiserver-bridge-387000" [8c7bd96b-9ce4-4036-9b1d-afd35eb17b6a] Running
	I0229 18:50:35.884661   56616 system_pods.go:61] "kube-controller-manager-bridge-387000" [152cc6f1-67ff-4972-84ab-8a09faac9c4d] Running
	I0229 18:50:35.884664   56616 system_pods.go:61] "kube-proxy-mkwsw" [8dff43d1-caa4-4fea-ae29-cc3d55c585f4] Running
	I0229 18:50:35.884666   56616 system_pods.go:61] "kube-scheduler-bridge-387000" [66a3a2c5-c283-4afb-9124-3e2242ab2cab] Running
	I0229 18:50:35.884669   56616 system_pods.go:61] "storage-provisioner" [b7deeece-3360-41fd-9102-5ff10007f1e5] Running
	I0229 18:50:35.884680   56616 system_pods.go:74] duration metric: took 179.975669ms to wait for pod list to return data ...
	I0229 18:50:35.884687   56616 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:50:36.082244   56616 default_sa.go:45] found service account: "default"
	I0229 18:50:36.082267   56616 default_sa.go:55] duration metric: took 197.571615ms for default service account to be created ...
	I0229 18:50:36.082274   56616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:50:36.286398   56616 system_pods.go:86] 7 kube-system pods found
	I0229 18:50:36.286426   56616 system_pods.go:89] "coredns-5dd5756b68-hpkhw" [451828e2-19b3-4425-b363-75fffabf5390] Running
	I0229 18:50:36.286432   56616 system_pods.go:89] "etcd-bridge-387000" [8f1f0795-62bc-4013-be47-19f384d6457e] Running
	I0229 18:50:36.286436   56616 system_pods.go:89] "kube-apiserver-bridge-387000" [8c7bd96b-9ce4-4036-9b1d-afd35eb17b6a] Running
	I0229 18:50:36.286440   56616 system_pods.go:89] "kube-controller-manager-bridge-387000" [152cc6f1-67ff-4972-84ab-8a09faac9c4d] Running
	I0229 18:50:36.286447   56616 system_pods.go:89] "kube-proxy-mkwsw" [8dff43d1-caa4-4fea-ae29-cc3d55c585f4] Running
	I0229 18:50:36.286452   56616 system_pods.go:89] "kube-scheduler-bridge-387000" [66a3a2c5-c283-4afb-9124-3e2242ab2cab] Running
	I0229 18:50:36.286456   56616 system_pods.go:89] "storage-provisioner" [b7deeece-3360-41fd-9102-5ff10007f1e5] Running
	I0229 18:50:36.286462   56616 system_pods.go:126] duration metric: took 204.182782ms to wait for k8s-apps to be running ...
	I0229 18:50:36.286468   56616 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:50:36.286508   56616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:50:36.305908   56616 system_svc.go:56] duration metric: took 19.43363ms WaitForService to wait for kubelet.
	I0229 18:50:36.305933   56616 kubeadm.go:581] duration metric: took 41.541901185s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:50:36.305950   56616 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:50:36.482097   56616 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:50:36.482123   56616 node_conditions.go:123] node cpu capacity is 2
	I0229 18:50:36.482135   56616 node_conditions.go:105] duration metric: took 176.181312ms to run NodePressure ...
	I0229 18:50:36.482145   56616 start.go:228] waiting for startup goroutines ...
	I0229 18:50:36.482151   56616 start.go:233] waiting for cluster config update ...
	I0229 18:50:36.482161   56616 start.go:242] writing updated cluster config ...
	I0229 18:50:36.482394   56616 ssh_runner.go:195] Run: rm -f paused
	I0229 18:50:36.531423   56616 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:50:36.533299   56616 out.go:177] * Done! kubectl is now configured to use "bridge-387000" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> containerd <==
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.986741472Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987004867Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987056148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987317854Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987441432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987501059Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987549532Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987598311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987871455Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/mnt/vda1/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/mnt/vda1/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.987985628Z" level=info msg="Connect containerd service"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988045901Z" level=info msg="using legacy CRI server"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988078198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.988266930Z" level=info msg="Get image filesystem path \"/mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989037697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989295153Z" level=info msg="Start subscribing containerd event"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.989377058Z" level=info msg="Start recovering state"
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.990279282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Feb 29 18:38:31 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:31.990471179Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034388260Z" level=info msg="Start event monitor"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034498239Z" level=info msg="Start snapshots syncer"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034517505Z" level=info msg="Start cni network conf syncer for default"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034527207Z" level=info msg="Start streaming server"
	Feb 29 18:38:32 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:38:32.034557306Z" level=info msg="containerd successfully booted in 0.090065s"
	Feb 29 18:42:48 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:42:48.052015588Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Feb 29 18:42:48 old-k8s-version-561577 containerd[620]: time="2024-02-29T18:42:48.052339514Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/.keep\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052023] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044537] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.656777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.325515] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.730699] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.588499] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +0.058606] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067411] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
	[  +0.168224] systemd-fstab-generator[509]: Ignoring "noauto" option for root device
	[  +0.171213] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.318931] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +5.900853] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.061557] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.980225] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.271847] systemd-fstab-generator[868]: Ignoring "noauto" option for root device
	[Feb29 18:42] systemd-fstab-generator[7934]: Ignoring "noauto" option for root device
	[  +0.070905] kauditd_printk_skb: 15 callbacks suppressed
	[Feb29 18:44] systemd-fstab-generator[9618]: Ignoring "noauto" option for root device
	[  +0.076239] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:01:39 up 23 min,  0 users,  load average: 0.00, 0.07, 0.11
	Linux old-k8s-version-561577 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 19:01:37 old-k8s-version-561577 kubelet[23913]: F0229 19:01:37.693534   23913 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:01:37 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:01:37 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:01:38 old-k8s-version-561577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1348.
	Feb 29 19:01:38 old-k8s-version-561577 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:01:38 old-k8s-version-561577 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:01:38 old-k8s-version-561577 kubelet[23923]: I0229 19:01:38.447274   23923 server.go:410] Version: v1.16.0
	Feb 29 19:01:38 old-k8s-version-561577 kubelet[23923]: I0229 19:01:38.447706   23923 plugins.go:100] No cloud provider specified.
	Feb 29 19:01:38 old-k8s-version-561577 kubelet[23923]: I0229 19:01:38.447756   23923 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:01:38 old-k8s-version-561577 kubelet[23923]: I0229 19:01:38.449953   23923 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:01:38 old-k8s-version-561577 kubelet[23923]: W0229 19:01:38.452337   23923 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:01:38 old-k8s-version-561577 kubelet[23923]: F0229 19:01:38.452454   23923 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:01:38 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:01:38 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:01:39 old-k8s-version-561577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1349.
	Feb 29 19:01:39 old-k8s-version-561577 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:01:39 old-k8s-version-561577 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:01:39 old-k8s-version-561577 kubelet[23962]: I0229 19:01:39.199858   23962 server.go:410] Version: v1.16.0
	Feb 29 19:01:39 old-k8s-version-561577 kubelet[23962]: I0229 19:01:39.200037   23962 plugins.go:100] No cloud provider specified.
	Feb 29 19:01:39 old-k8s-version-561577 kubelet[23962]: I0229 19:01:39.200048   23962 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:01:39 old-k8s-version-561577 kubelet[23962]: I0229 19:01:39.202330   23962 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:01:39 old-k8s-version-561577 kubelet[23962]: W0229 19:01:39.215955   23962 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:01:39 old-k8s-version-561577 kubelet[23962]: F0229 19:01:39.216029   23962 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:01:39 old-k8s-version-561577 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:01:39 old-k8s-version-561577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 2 (252.583181ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-561577" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (354.00s)

                                                
                                    

Test pass (266/316)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 70.79
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 56.56
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.13
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.29.0-rc.2/json-events 55.9
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.13
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.56
31 TestOffline 86.07
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 215.25
38 TestAddons/parallel/Registry 19.81
39 TestAddons/parallel/Ingress 21.91
40 TestAddons/parallel/InspektorGadget 10.98
41 TestAddons/parallel/MetricsServer 6.19
42 TestAddons/parallel/HelmTiller 16.69
44 TestAddons/parallel/CSI 56.66
45 TestAddons/parallel/Headlamp 14.36
46 TestAddons/parallel/CloudSpanner 5.66
47 TestAddons/parallel/LocalPath 62.41
48 TestAddons/parallel/NvidiaDevicePlugin 6.56
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 92.51
54 TestCertOptions 72.69
55 TestCertExpiration 281.47
57 TestForceSystemdFlag 66.31
58 TestForceSystemdEnv 46.93
60 TestKVMDriverInstallOrUpdate 9.09
64 TestErrorSpam/setup 43.93
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.75
67 TestErrorSpam/pause 1.59
68 TestErrorSpam/unpause 1.67
69 TestErrorSpam/stop 1.58
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 99.45
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.26
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 7.68
81 TestFunctional/serial/CacheCmd/cache/add_local 2.96
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 44.58
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.53
92 TestFunctional/serial/LogsFileCmd 1.56
93 TestFunctional/serial/InvalidService 4.07
95 TestFunctional/parallel/ConfigCmd 0.35
96 TestFunctional/parallel/DashboardCmd 14.62
97 TestFunctional/parallel/DryRun 0.39
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.01
103 TestFunctional/parallel/ServiceCmdConnect 12.57
104 TestFunctional/parallel/AddonsCmd 0.14
105 TestFunctional/parallel/PersistentVolumeClaim 47.56
107 TestFunctional/parallel/SSHCmd 0.45
108 TestFunctional/parallel/CpCmd 1.66
109 TestFunctional/parallel/MySQL 34.17
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.62
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
119 TestFunctional/parallel/License 0.8
120 TestFunctional/parallel/Version/short 0.09
121 TestFunctional/parallel/Version/components 0.77
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.39
127 TestFunctional/parallel/ImageCommands/Setup 2.76
128 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
130 TestFunctional/parallel/ProfileCmd/profile_list 0.27
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
132 TestFunctional/parallel/MountCmd/any-port 10.32
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.34
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.9
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.06
136 TestFunctional/parallel/ServiceCmd/List 0.31
137 TestFunctional/parallel/MountCmd/specific-port 1.85
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
140 TestFunctional/parallel/ServiceCmd/Format 0.47
141 TestFunctional/parallel/ServiceCmd/URL 0.34
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.53
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.18
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
172 TestJSONOutput/start/Command 60.69
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.75
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.64
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.1
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.21
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 92.46
204 TestMountStart/serial/StartWithMountFirst 32.7
205 TestMountStart/serial/VerifyMountFirst 0.37
206 TestMountStart/serial/StartWithMountSecond 28.75
207 TestMountStart/serial/VerifyMountSecond 0.38
208 TestMountStart/serial/DeleteFirst 0.89
209 TestMountStart/serial/VerifyMountPostDelete 0.38
210 TestMountStart/serial/Stop 1.18
211 TestMountStart/serial/RestartStopped 27.32
212 TestMountStart/serial/VerifyMountPostStop 0.37
215 TestMultiNode/serial/FreshStart2Nodes 188.58
216 TestMultiNode/serial/DeployApp2Nodes 6.24
217 TestMultiNode/serial/PingHostFrom2Pods 0.88
218 TestMultiNode/serial/AddNode 45.26
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.2
221 TestMultiNode/serial/CopyFile 7.6
222 TestMultiNode/serial/StopNode 2.14
223 TestMultiNode/serial/StartAfterStop 23.77
224 TestMultiNode/serial/RestartKeepsNodes 310.58
225 TestMultiNode/serial/DeleteNode 1.7
226 TestMultiNode/serial/StopMultiNode 183.59
227 TestMultiNode/serial/RestartMultiNode 87.08
228 TestMultiNode/serial/ValidateNameConflict 47.58
233 TestPreload 258.6
235 TestScheduledStopUnix 118.3
239 TestRunningBinaryUpgrade 224.29
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestNoKubernetes/serial/StartWithK8s 98.48
254 TestPause/serial/Start 122.13
255 TestNoKubernetes/serial/StartWithStopK8s 79.81
256 TestPause/serial/SecondStartNoReconfiguration 8.33
257 TestPause/serial/Pause 1.59
258 TestNoKubernetes/serial/Start 28.28
259 TestPause/serial/VerifyStatus 0.29
260 TestPause/serial/Unpause 0.92
261 TestPause/serial/PauseAgain 1.03
262 TestPause/serial/DeletePaused 0.87
263 TestPause/serial/VerifyDeletedResources 3.55
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
265 TestNoKubernetes/serial/ProfileList 14.73
266 TestNoKubernetes/serial/Stop 1.34
267 TestNoKubernetes/serial/StartNoArgs 28.21
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
276 TestNetworkPlugins/group/false 3.39
280 TestStoppedBinaryUpgrade/Setup 3.7
281 TestStoppedBinaryUpgrade/Upgrade 198.23
285 TestStartStop/group/no-preload/serial/FirstStart 211.54
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
288 TestStartStop/group/embed-certs/serial/FirstStart 61.18
289 TestStartStop/group/embed-certs/serial/DeployApp 11.34
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
291 TestStartStop/group/embed-certs/serial/Stop 92.26
292 TestStartStop/group/no-preload/serial/DeployApp 11.33
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
294 TestStartStop/group/no-preload/serial/Stop 92.25
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.8
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
300 TestStartStop/group/embed-certs/serial/SecondStart 331.43
301 TestStartStop/group/old-k8s-version/serial/Stop 1.36
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
305 TestStartStop/group/no-preload/serial/SecondStart 348.22
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.26
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 331.96
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
314 TestStartStop/group/embed-certs/serial/Pause 2.71
316 TestStartStop/group/newest-cni/serial/FirstStart 59.14
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 20.01
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
319 TestStartStop/group/newest-cni/serial/DeployApp 0
320 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
322 TestStartStop/group/no-preload/serial/Pause 2.89
323 TestStartStop/group/newest-cni/serial/Stop 2.1
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/newest-cni/serial/SecondStart 44.91
326 TestNetworkPlugins/group/auto/Start 123.72
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
330 TestStartStop/group/newest-cni/serial/Pause 2.73
331 TestNetworkPlugins/group/kindnet/Start 69.35
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.76
336 TestNetworkPlugins/group/calico/Start 101.96
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
339 TestNetworkPlugins/group/kindnet/NetCatPod 12.55
340 TestNetworkPlugins/group/auto/KubeletFlags 0.22
341 TestNetworkPlugins/group/auto/NetCatPod 10.36
342 TestNetworkPlugins/group/auto/DNS 0.47
343 TestNetworkPlugins/group/auto/Localhost 0.17
344 TestNetworkPlugins/group/kindnet/DNS 0.19
345 TestNetworkPlugins/group/auto/HairPin 0.15
346 TestNetworkPlugins/group/kindnet/Localhost 0.18
347 TestNetworkPlugins/group/kindnet/HairPin 0.15
349 TestNetworkPlugins/group/custom-flannel/Start 85.94
350 TestNetworkPlugins/group/enable-default-cni/Start 132.59
351 TestNetworkPlugins/group/calico/ControllerPod 6.01
352 TestNetworkPlugins/group/calico/KubeletFlags 0.36
353 TestNetworkPlugins/group/calico/NetCatPod 12.72
354 TestNetworkPlugins/group/calico/DNS 0.18
355 TestNetworkPlugins/group/calico/Localhost 0.15
356 TestNetworkPlugins/group/calico/HairPin 0.15
357 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
358 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
359 TestNetworkPlugins/group/flannel/Start 97.79
360 TestNetworkPlugins/group/custom-flannel/DNS 0.45
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
363 TestNetworkPlugins/group/bridge/Start 105.82
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
371 TestNetworkPlugins/group/flannel/NetCatPod 10.22
372 TestNetworkPlugins/group/flannel/DNS 0.18
373 TestNetworkPlugins/group/flannel/Localhost 0.14
374 TestNetworkPlugins/group/flannel/HairPin 0.12
375 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
376 TestNetworkPlugins/group/bridge/NetCatPod 9.23
377 TestNetworkPlugins/group/bridge/DNS 0.16
378 TestNetworkPlugins/group/bridge/Localhost 0.12
379 TestNetworkPlugins/group/bridge/HairPin 0.12

TestDownloadOnly/v1.16.0/json-events (70.79s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-567726 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-567726 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m10.787770332s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (70.79s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-567726
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-567726: exit status 85 (69.425225ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-567726 | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |          |
	|         | -p download-only-567726        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:01
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:01.586295   13732 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:38:01.586566   13732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:01.586575   13732 out.go:304] Setting ErrFile to fd 2...
	I0229 17:38:01.586579   13732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:01.586805   13732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	W0229 17:38:01.586956   13732 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18259-6412/.minikube/config/config.json: open /home/jenkins/minikube-integration/18259-6412/.minikube/config/config.json: no such file or directory
	I0229 17:38:01.587562   13732 out.go:298] Setting JSON to true
	I0229 17:38:01.588444   13732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1223,"bootTime":1709227059,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:38:01.588501   13732 start.go:139] virtualization: kvm guest
	I0229 17:38:01.590750   13732 out.go:97] [download-only-567726] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:38:01.592192   13732 out.go:169] MINIKUBE_LOCATION=18259
	W0229 17:38:01.590843   13732 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 17:38:01.590877   13732 notify.go:220] Checking for updates...
	I0229 17:38:01.593514   13732 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:38:01.594762   13732 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 17:38:01.595980   13732 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:38:01.597122   13732 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:38:01.599801   13732 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:01.599998   13732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:38:01.695224   13732 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:38:01.695254   13732 start.go:299] selected driver: kvm2
	I0229 17:38:01.695265   13732 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:38:01.695604   13732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:38:01.695713   13732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:38:01.709768   13732 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:38:01.709809   13732 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:38:01.710255   13732 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:38:01.710388   13732 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:38:01.710449   13732 cni.go:84] Creating CNI manager for ""
	I0229 17:38:01.710462   13732 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 17:38:01.710470   13732 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:38:01.710476   13732 start_flags.go:323] config:
	{Name:download-only-567726 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-567726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:38:01.710713   13732 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:38:01.712659   13732 out.go:97] Downloading VM boot image ...
	I0229 17:38:01.712690   13732 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:38:13.859828   13732 out.go:97] Starting control plane node download-only-567726 in cluster download-only-567726
	I0229 17:38:13.859865   13732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 17:38:14.040297   13732 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 17:38:14.040334   13732 cache.go:56] Caching tarball of preloaded images
	I0229 17:38:14.040474   13732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 17:38:14.042452   13732 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 17:38:14.042470   13732 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:38:14.192447   13732 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0229 17:38:39.006954   13732 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:38:39.007052   13732 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:38:39.845234   13732 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0229 17:38:39.845558   13732 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/download-only-567726/config.json ...
	I0229 17:38:39.845591   13732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/download-only-567726/config.json: {Name:mk9c50c452421a61145bb71d57725fee39842872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:38:39.845734   13732 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0229 17:38:39.845884   13732 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-567726"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-567726
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.28.4/json-events (56.56s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-160839 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-160839 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (56.559153707s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (56.56s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-160839
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-160839: exit status 85 (72.290443ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-567726 | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-567726        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| delete  | -p download-only-567726        | download-only-567726 | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| start   | -o=json --download-only        | download-only-160839 | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC |                     |
	|         | -p download-only-160839        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:39:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:39:12.713088   14045 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:39:12.713286   14045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:39:12.713301   14045 out.go:304] Setting ErrFile to fd 2...
	I0229 17:39:12.713313   14045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:39:12.713778   14045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 17:39:12.714675   14045 out.go:298] Setting JSON to true
	I0229 17:39:12.715491   14045 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1294,"bootTime":1709227059,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:39:12.715551   14045 start.go:139] virtualization: kvm guest
	I0229 17:39:12.717979   14045 out.go:97] [download-only-160839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:39:12.719534   14045 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:39:12.718102   14045 notify.go:220] Checking for updates...
	I0229 17:39:12.722327   14045 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:39:12.723783   14045 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 17:39:12.725047   14045 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:39:12.726329   14045 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:39:12.728718   14045 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:39:12.728920   14045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:39:12.759317   14045 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:39:12.759354   14045 start.go:299] selected driver: kvm2
	I0229 17:39:12.759359   14045 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:39:12.759690   14045 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:39:12.759750   14045 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:39:12.774291   14045 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:39:12.774380   14045 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:39:12.774874   14045 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:39:12.774996   14045 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:39:12.775045   14045 cni.go:84] Creating CNI manager for ""
	I0229 17:39:12.775057   14045 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 17:39:12.775064   14045 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:39:12.775075   14045 start_flags.go:323] config:
	{Name:download-only-160839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-160839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:39:12.775194   14045 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:39:12.776880   14045 out.go:97] Starting control plane node download-only-160839 in cluster download-only-160839
	I0229 17:39:12.776895   14045 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 17:39:12.935220   14045 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 17:39:12.935252   14045 cache.go:56] Caching tarball of preloaded images
	I0229 17:39:12.935394   14045 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 17:39:12.937478   14045 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 17:39:12.937501   14045 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:39:13.091861   14045 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0229 17:39:29.525865   14045 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:39:29.525958   14045 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:39:30.396275   14045 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0229 17:39:30.396647   14045 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/download-only-160839/config.json ...
	I0229 17:39:30.396685   14045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/download-only-160839/config.json: {Name:mkf9677b7618ac9fe0770ceff3c316fcb6d7a7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:39:30.396869   14045 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0229 17:39:30.397039   14045 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-160839"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.4/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.13s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-160839
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.29.0-rc.2/json-events (55.9s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-892412 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-892412 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (55.900172026s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (55.90s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-892412
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-892412: exit status 85 (70.929256ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-567726 | jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-567726           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| delete  | -p download-only-567726           | download-only-567726 | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| start   | -o=json --download-only           | download-only-160839 | jenkins | v1.32.0 | 29 Feb 24 17:39 UTC |                     |
	|         | -p download-only-160839           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:40 UTC |
	| delete  | -p download-only-160839           | download-only-160839 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC | 29 Feb 24 17:40 UTC |
	| start   | -o=json --download-only           | download-only-892412 | jenkins | v1.32.0 | 29 Feb 24 17:40 UTC |                     |
	|         | -p download-only-892412           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:40:09
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:40:09.604405   14315 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:40:09.604570   14315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:40:09.604580   14315 out.go:304] Setting ErrFile to fd 2...
	I0229 17:40:09.604587   14315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:40:09.604799   14315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 17:40:09.605372   14315 out.go:298] Setting JSON to true
	I0229 17:40:09.606195   14315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1351,"bootTime":1709227059,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:40:09.606260   14315 start.go:139] virtualization: kvm guest
	I0229 17:40:09.608298   14315 out.go:97] [download-only-892412] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:40:09.609840   14315 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:40:09.608419   14315 notify.go:220] Checking for updates...
	I0229 17:40:09.612428   14315 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:40:09.613885   14315 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 17:40:09.615163   14315 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:40:09.616538   14315 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:40:09.619325   14315 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:40:09.619564   14315 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:40:09.650259   14315 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:40:09.650293   14315 start.go:299] selected driver: kvm2
	I0229 17:40:09.650304   14315 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:40:09.650736   14315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:40:09.650819   14315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:40:09.664887   14315 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:40:09.664939   14315 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:40:09.665359   14315 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:40:09.665490   14315 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:40:09.665551   14315 cni.go:84] Creating CNI manager for ""
	I0229 17:40:09.665570   14315 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0229 17:40:09.665579   14315 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:40:09.665590   14315 start_flags.go:323] config:
	{Name:download-only-892412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-892412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:40:09.665702   14315 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:40:09.667403   14315 out.go:97] Starting control plane node download-only-892412 in cluster download-only-892412
	I0229 17:40:09.667420   14315 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 17:40:09.821243   14315 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0229 17:40:09.821285   14315 cache.go:56] Caching tarball of preloaded images
	I0229 17:40:09.821456   14315 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 17:40:09.823276   14315 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 17:40:09.823297   14315 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:40:09.977009   14315 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0229 17:40:29.560601   14315 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:40:29.560686   14315 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0229 17:40:30.319611   14315 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0229 17:40:30.319941   14315 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/download-only-892412/config.json ...
	I0229 17:40:30.319970   14315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/download-only-892412/config.json: {Name:mkebc01277c3f212db76a1d43f00e48f878f1cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:40:30.320127   14315 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0229 17:40:30.320256   14315 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-892412"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.13s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-892412
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-215397 --alsologtostderr --binary-mirror http://127.0.0.1:35197 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-215397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-215397
--- PASS: TestBinaryMirror (0.56s)

TestOffline (86.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-352762 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-352762 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m24.612704191s)
helpers_test.go:175: Cleaning up "offline-containerd-352762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-352762
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-352762: (1.458453655s)
--- PASS: TestOffline (86.07s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-771161
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-771161: exit status 85 (63.355731ms)

                                                
                                                
-- stdout --
	* Profile "addons-771161" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-771161"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-771161
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-771161: exit status 85 (64.182129ms)

                                                
                                                
-- stdout --
	* Profile "addons-771161" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-771161"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (215.25s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-771161 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-771161 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m35.246468883s)
--- PASS: TestAddons/Setup (215.25s)

                                                
                                    
TestAddons/parallel/Registry (19.81s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 32.606692ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2hchm" [789834d2-f485-4d1e-aa91-8f1ad59eaf21] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.014559954s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hv87q" [6ede1b7c-395d-4f7d-a587-82c5c0d615ac] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.008511618s
addons_test.go:340: (dbg) Run:  kubectl --context addons-771161 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-771161 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-771161 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.977053179s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 ip
2024/02/29 17:45:01 [DEBUG] GET http://192.168.39.70:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.81s)

                                                
                                    
TestAddons/parallel/Ingress (21.91s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-771161 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-771161 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-771161 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b013521f-a627-4737-9479-3be198d1c9bc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b013521f-a627-4737-9479-3be198d1c9bc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004436039s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-771161 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.70
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-771161 addons disable ingress-dns --alsologtostderr -v=1: (1.848590879s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-771161 addons disable ingress --alsologtostderr -v=1: (7.858225267s)
--- PASS: TestAddons/parallel/Ingress (21.91s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.98s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-c7ffc" [727c8e6c-f9d1-43a4-be13-bc0425c0fa2a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00772858s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-771161
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-771161: (5.96819904s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.19s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 32.608424ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-cl9m6" [6d5415b1-7e8a-4670-9962-10d8c32a1776] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007114561s
addons_test.go:415: (dbg) Run:  kubectl --context addons-771161 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-771161 addons disable metrics-server --alsologtostderr -v=1: (1.086230162s)
--- PASS: TestAddons/parallel/MetricsServer (6.19s)

                                                
                                    
TestAddons/parallel/HelmTiller (16.69s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 32.581105ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-d9z5t" [8e935566-9082-4e9e-92fb-3a9b263900a1] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.022688489s
addons_test.go:473: (dbg) Run:  kubectl --context addons-771161 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-771161 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.886186081s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.69s)

                                                
                                    
TestAddons/parallel/CSI (56.66s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 33.582304ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-771161 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-771161 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ad813d01-c784-4b49-9a1d-f53cf8726068] Pending
helpers_test.go:344: "task-pv-pod" [ad813d01-c784-4b49-9a1d-f53cf8726068] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ad813d01-c784-4b49-9a1d-f53cf8726068] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.005460358s
addons_test.go:584: (dbg) Run:  kubectl --context addons-771161 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-771161 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-771161 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-771161 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-771161 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-771161 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-771161 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [453820ef-ed8f-443e-ab1b-eaeb482ea88d] Pending
helpers_test.go:344: "task-pv-pod-restore" [453820ef-ed8f-443e-ab1b-eaeb482ea88d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [453820ef-ed8f-443e-ab1b-eaeb482ea88d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.005284521s
addons_test.go:626: (dbg) Run:  kubectl --context addons-771161 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-771161 delete pod task-pv-pod-restore: (1.554245916s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-771161 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-771161 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-771161 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.339225953s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.66s)

                                                
                                    
TestAddons/parallel/Headlamp (14.36s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-771161 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-771161 --alsologtostderr -v=1: (1.352193244s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-fh4xc" [4baca48d-2c8e-46fa-88d7-ee3baf4be1fc] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-fh4xc" [4baca48d-2c8e-46fa-88d7-ee3baf4be1fc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-fh4xc" [4baca48d-2c8e-46fa-88d7-ee3baf4be1fc] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004843705s
--- PASS: TestAddons/parallel/Headlamp (14.36s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.66s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-ffn5k" [1d5b7743-3398-478c-90f7-7ddf5a51b491] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005004914s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-771161
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                    
TestAddons/parallel/LocalPath (62.41s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-771161 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-771161 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cd36fb08-1d38-4612-9f25-4a686a524f3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cd36fb08-1d38-4612-9f25-4a686a524f3c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cd36fb08-1d38-4612-9f25-4a686a524f3c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.005756249s
addons_test.go:891: (dbg) Run:  kubectl --context addons-771161 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 ssh "cat /opt/local-path-provisioner/pvc-222145ee-3557-42de-a16f-87d4491f0b88_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-771161 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-771161 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-771161 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-771161 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.589315023s)
--- PASS: TestAddons/parallel/LocalPath (62.41s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.56s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q666x" [80d53628-cec1-4f11-af24-3bfee06d9b46] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005022281s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-771161
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-tld5h" [bd197b03-00e1-4466-bc97-100b5143b57c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009908253s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-771161 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-771161 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.51s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-771161
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-771161: (1m32.223088651s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-771161
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-771161
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-771161
--- PASS: TestAddons/StoppedEnableDisable (92.51s)

                                                
                                    
TestCertOptions (72.69s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-153536 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-153536 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m11.423394112s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-153536 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-153536 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-153536 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-153536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-153536
--- PASS: TestCertOptions (72.69s)

                                                
                                    
TestCertExpiration (281.47s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-829233 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-829233 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m32.952619962s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-829233 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-829233 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (7.688309254s)
helpers_test.go:175: Cleaning up "cert-expiration-829233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-829233
--- PASS: TestCertExpiration (281.47s)

                                                
                                    
TestForceSystemdFlag (66.31s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-477484 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-477484 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m5.045601263s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-477484 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-477484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-477484
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-477484: (1.047399143s)
--- PASS: TestForceSystemdFlag (66.31s)

                                                
                                    
TestForceSystemdEnv (46.93s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-403978 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-403978 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (45.892595597s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-403978 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-403978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-403978
--- PASS: TestForceSystemdEnv (46.93s)

                                                
                                    
TestKVMDriverInstallOrUpdate (9.09s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (9.09s)

                                                
                                    
TestErrorSpam/setup (43.93s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-736536 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-736536 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-736536 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-736536 --driver=kvm2  --container-runtime=containerd: (43.925647974s)
--- PASS: TestErrorSpam/setup (43.93s)

                                                
                                    
TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.59s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.67s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
TestErrorSpam/stop (1.58s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 stop: (1.421807434s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736536 --log_dir /tmp/nospam-736536 stop
--- PASS: TestErrorSpam/stop (1.58s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/test/nested/copy/13721/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (99.45s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-296731 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0229 17:49:42.039823   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:42.045465   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:42.055682   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:42.075910   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:42.116158   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:42.196489   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:42.356997   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:42.677550   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:43.318475   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:44.598983   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:47.159213   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:49:52.279807   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:50:02.520809   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-296731 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m39.44743678s)
--- PASS: TestFunctional/serial/StartWithProxy (99.45s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.26s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-296731 --alsologtostderr -v=8
E0229 17:50:23.001729   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-296731 --alsologtostderr -v=8: (6.26047134s)
functional_test.go:659: soft start took 6.26105207s for "functional-296731" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.26s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-296731 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (7.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 cache add registry.k8s.io/pause:3.1: (1.156005418s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 cache add registry.k8s.io/pause:3.3: (3.280943075s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 cache add registry.k8s.io/pause:latest: (3.24307828s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-296731 /tmp/TestFunctionalserialCacheCmdcacheadd_local405887566/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cache add minikube-local-cache-test:functional-296731
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 cache add minikube-local-cache-test:functional-296731: (2.632405965s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cache delete minikube-local-cache-test:functional-296731
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-296731
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.96s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (232.467974ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 cache reload: (1.137541707s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 kubectl -- --context functional-296731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-296731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.58s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-296731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0229 17:51:03.963534   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-296731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.583334922s)
functional_test.go:757: restart took 44.583468602s for "functional-296731" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.58s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-296731 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.53s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 logs: (1.527269837s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.56s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 logs --file /tmp/TestFunctionalserialLogsFileCmd2906151913/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 logs --file /tmp/TestFunctionalserialLogsFileCmd2906151913/001/logs.txt: (1.560748614s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                    
TestFunctional/serial/InvalidService (4.07s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-296731 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-296731
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-296731: exit status 115 (282.996424ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.40:30776 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-296731 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 config get cpus: exit status 14 (56.937453ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 config get cpus: exit status 14 (55.392456ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.62s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-296731 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-296731 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21578: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.62s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-296731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-296731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (158.651599ms)

                                                
                                                
-- stdout --
	* [functional-296731] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:51:46.828737   20898 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:51:46.828817   20898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:51:46.828824   20898 out.go:304] Setting ErrFile to fd 2...
	I0229 17:51:46.828829   20898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:51:46.829014   20898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 17:51:46.829520   20898 out.go:298] Setting JSON to false
	I0229 17:51:46.830373   20898 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2048,"bootTime":1709227059,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:51:46.830434   20898 start.go:139] virtualization: kvm guest
	I0229 17:51:46.832955   20898 out.go:177] * [functional-296731] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:51:46.834594   20898 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:51:46.834608   20898 notify.go:220] Checking for updates...
	I0229 17:51:46.836751   20898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:51:46.838174   20898 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 17:51:46.839618   20898 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:51:46.841231   20898 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:51:46.842877   20898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:51:46.844763   20898 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 17:51:46.845348   20898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:51:46.845401   20898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:51:46.862584   20898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0229 17:51:46.863036   20898 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:51:46.863669   20898 main.go:141] libmachine: Using API Version  1
	I0229 17:51:46.863701   20898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:51:46.864031   20898 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:51:46.864264   20898 main.go:141] libmachine: (functional-296731) Calling .DriverName
	I0229 17:51:46.864539   20898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:51:46.864937   20898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:51:46.864979   20898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:51:46.880112   20898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I0229 17:51:46.880489   20898 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:51:46.880940   20898 main.go:141] libmachine: Using API Version  1
	I0229 17:51:46.880964   20898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:51:46.881356   20898 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:51:46.881570   20898 main.go:141] libmachine: (functional-296731) Calling .DriverName
	I0229 17:51:46.916249   20898 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 17:51:46.917572   20898 start.go:299] selected driver: kvm2
	I0229 17:51:46.917598   20898 start.go:903] validating driver "kvm2" against &{Name:functional-296731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-296731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:51:46.917727   20898 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:51:46.920039   20898 out.go:177] 
	W0229 17:51:46.921383   20898 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 17:51:46.922831   20898 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-296731 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.39s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-296731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-296731 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (169.449893ms)

                                                
                                                
-- stdout --
	* [functional-296731] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:51:46.668536   20837 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:51:46.668828   20837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:51:46.668838   20837 out.go:304] Setting ErrFile to fd 2...
	I0229 17:51:46.668843   20837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:51:46.669145   20837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 17:51:46.669635   20837 out.go:298] Setting JSON to false
	I0229 17:51:46.671840   20837 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2048,"bootTime":1709227059,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:51:46.671899   20837 start.go:139] virtualization: kvm guest
	I0229 17:51:46.674209   20837 out.go:177] * [functional-296731] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0229 17:51:46.675672   20837 notify.go:220] Checking for updates...
	I0229 17:51:46.675689   20837 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:51:46.677013   20837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:51:46.678516   20837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 17:51:46.679853   20837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 17:51:46.681158   20837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:51:46.682402   20837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:51:46.684260   20837 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 17:51:46.684814   20837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:51:46.684868   20837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:51:46.700301   20837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40589
	I0229 17:51:46.700688   20837 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:51:46.701203   20837 main.go:141] libmachine: Using API Version  1
	I0229 17:51:46.701221   20837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:51:46.701685   20837 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:51:46.701849   20837 main.go:141] libmachine: (functional-296731) Calling .DriverName
	I0229 17:51:46.702115   20837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:51:46.702446   20837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 17:51:46.702479   20837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:51:46.720548   20837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I0229 17:51:46.721019   20837 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:51:46.721520   20837 main.go:141] libmachine: Using API Version  1
	I0229 17:51:46.721570   20837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:51:46.721865   20837 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:51:46.722054   20837 main.go:141] libmachine: (functional-296731) Calling .DriverName
	I0229 17:51:46.755617   20837 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0229 17:51:46.756938   20837 start.go:299] selected driver: kvm2
	I0229 17:51:46.756962   20837 start.go:903] validating driver "kvm2" against &{Name:functional-296731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-296731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:51:46.757085   20837 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:51:46.759489   20837 out.go:177] 
	W0229 17:51:46.760832   20837 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 17:51:46.762443   20837 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-296731 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-296731 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-fjd7g" [80c7f1e0-1c4a-4024-9c91-23bc793ef9ee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-fjd7g" [80c7f1e0-1c4a-4024-9c91-23bc793ef9ee] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.008985567s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.40:32293
functional_test.go:1671: http://192.168.39.40:32293: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-fjd7g

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.40:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.40:32293
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.57s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (47.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [47522720-2bf8-4294-80d4-aa9e3fc52bb8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004850473s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-296731 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-296731 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-296731 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-296731 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-296731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6e758594-34cb-4be4-9977-ddbcbbd514f3] Pending
helpers_test.go:344: "sp-pod" [6e758594-34cb-4be4-9977-ddbcbbd514f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/02/29 17:52:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [6e758594-34cb-4be4-9977-ddbcbbd514f3] Running
E0229 17:52:25.883848   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.005279136s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-296731 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-296731 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-296731 delete -f testdata/storage-provisioner/pod.yaml: (1.234699763s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-296731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7d30bde9-5bac-4034-aa36-a64d9a460026] Pending
helpers_test.go:344: "sp-pod" [7d30bde9-5bac-4034-aa36-a64d9a460026] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7d30bde9-5bac-4034-aa36-a64d9a460026] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005650768s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-296731 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.56s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh -n functional-296731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cp functional-296731:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1853773639/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh -n functional-296731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh -n functional-296731 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

                                                
                                    
TestFunctional/parallel/MySQL (34.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-296731 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pmfgw" [821d4609-0645-4eed-b1ab-f4394405648f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pmfgw" [821d4609-0645-4eed-b1ab-f4394405648f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.005265357s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-296731 exec mysql-859648c796-pmfgw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-296731 exec mysql-859648c796-pmfgw -- mysql -ppassword -e "show databases;": exit status 1 (138.457302ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-296731 exec mysql-859648c796-pmfgw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-296731 exec mysql-859648c796-pmfgw -- mysql -ppassword -e "show databases;": exit status 1 (150.948892ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-296731 exec mysql-859648c796-pmfgw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-296731 exec mysql-859648c796-pmfgw -- mysql -ppassword -e "show databases;": exit status 1 (202.171929ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-296731 exec mysql-859648c796-pmfgw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.17s)

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13721/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo cat /etc/test/nested/copy/13721/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13721.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo cat /etc/ssl/certs/13721.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13721.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo cat /usr/share/ca-certificates/13721.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/137212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo cat /etc/ssl/certs/137212.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/137212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo cat /usr/share/ca-certificates/137212.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-296731 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh "sudo systemctl is-active docker": exit status 1 (229.755414ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh "sudo systemctl is-active crio": exit status 1 (212.988699ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                    
TestFunctional/parallel/License (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.80s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-296731 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-296731
docker.io/library/minikube-local-cache-test:functional-296731
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-296731 image ls --format short --alsologtostderr:
I0229 17:51:56.714245   22017 out.go:291] Setting OutFile to fd 1 ...
I0229 17:51:56.714360   22017 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:51:56.714371   22017 out.go:304] Setting ErrFile to fd 2...
I0229 17:51:56.714375   22017 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:51:56.714598   22017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
I0229 17:51:56.715192   22017 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:51:56.715311   22017 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:51:56.715664   22017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:51:56.715710   22017 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:51:56.731081   22017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
I0229 17:51:56.731487   22017 main.go:141] libmachine: () Calling .GetVersion
I0229 17:51:56.732041   22017 main.go:141] libmachine: Using API Version  1
I0229 17:51:56.732064   22017 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:51:56.732369   22017 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:51:56.732571   22017 main.go:141] libmachine: (functional-296731) Calling .GetState
I0229 17:51:56.734371   22017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:51:56.734413   22017 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:51:56.748189   22017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
I0229 17:51:56.748581   22017 main.go:141] libmachine: () Calling .GetVersion
I0229 17:51:56.748997   22017 main.go:141] libmachine: Using API Version  1
I0229 17:51:56.749018   22017 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:51:56.749354   22017 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:51:56.749546   22017 main.go:141] libmachine: (functional-296731) Calling .DriverName
I0229 17:51:56.749757   22017 ssh_runner.go:195] Run: systemctl --version
I0229 17:51:56.749779   22017 main.go:141] libmachine: (functional-296731) Calling .GetSSHHostname
I0229 17:51:56.752626   22017 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:51:56.753036   22017 main.go:141] libmachine: (functional-296731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:4c:ce", ip: ""} in network mk-functional-296731: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:57 +0000 UTC Type:0 Mac:52:54:00:ba:4c:ce Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-296731 Clientid:01:52:54:00:ba:4c:ce}
I0229 17:51:56.753069   22017 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined IP address 192.168.39.40 and MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:51:56.753205   22017 main.go:141] libmachine: (functional-296731) Calling .GetSSHPort
I0229 17:51:56.753364   22017 main.go:141] libmachine: (functional-296731) Calling .GetSSHKeyPath
I0229 17:51:56.753485   22017 main.go:141] libmachine: (functional-296731) Calling .GetSSHUsername
I0229 17:51:56.753615   22017 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/functional-296731/id_rsa Username:docker}
I0229 17:51:56.850679   22017 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:51:56.913656   22017 main.go:141] libmachine: Making call to close driver server
I0229 17:51:56.913676   22017 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:51:56.913927   22017 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:51:56.913945   22017 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:51:56.913954   22017 main.go:141] libmachine: Making call to close driver server
I0229 17:51:56.913962   22017 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:51:56.914196   22017 main.go:141] libmachine: (functional-296731) DBG | Closing plugin on server side
I0229 17:51:56.914191   22017 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:51:56.914237   22017 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-296731 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| localhost/my-image                          | functional-296731  | sha256:bad549 | 775kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/google-containers/addon-resizer      | functional-296731  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/minikube-local-cache-test | functional-296731  | sha256:a19bf1 | 1.01kB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-296731 image ls --format table --alsologtostderr:
I0229 17:52:02.068006   22187 out.go:291] Setting OutFile to fd 1 ...
I0229 17:52:02.068224   22187 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:52:02.068231   22187 out.go:304] Setting ErrFile to fd 2...
I0229 17:52:02.068235   22187 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:52:02.068435   22187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
I0229 17:52:02.069001   22187 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:52:02.069093   22187 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:52:02.069451   22187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:52:02.069491   22187 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:52:02.084506   22187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
I0229 17:52:02.084952   22187 main.go:141] libmachine: () Calling .GetVersion
I0229 17:52:02.085517   22187 main.go:141] libmachine: Using API Version  1
I0229 17:52:02.085541   22187 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:52:02.085921   22187 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:52:02.086112   22187 main.go:141] libmachine: (functional-296731) Calling .GetState
I0229 17:52:02.087968   22187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:52:02.088009   22187 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:52:02.102337   22187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
I0229 17:52:02.102777   22187 main.go:141] libmachine: () Calling .GetVersion
I0229 17:52:02.103224   22187 main.go:141] libmachine: Using API Version  1
I0229 17:52:02.103248   22187 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:52:02.103627   22187 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:52:02.103812   22187 main.go:141] libmachine: (functional-296731) Calling .DriverName
I0229 17:52:02.104006   22187 ssh_runner.go:195] Run: systemctl --version
I0229 17:52:02.104030   22187 main.go:141] libmachine: (functional-296731) Calling .GetSSHHostname
I0229 17:52:02.106935   22187 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:52:02.107402   22187 main.go:141] libmachine: (functional-296731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:4c:ce", ip: ""} in network mk-functional-296731: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:57 +0000 UTC Type:0 Mac:52:54:00:ba:4c:ce Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-296731 Clientid:01:52:54:00:ba:4c:ce}
I0229 17:52:02.107440   22187 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined IP address 192.168.39.40 and MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:52:02.107576   22187 main.go:141] libmachine: (functional-296731) Calling .GetSSHPort
I0229 17:52:02.107742   22187 main.go:141] libmachine: (functional-296731) Calling .GetSSHKeyPath
I0229 17:52:02.107925   22187 main.go:141] libmachine: (functional-296731) Calling .GetSSHUsername
I0229 17:52:02.108079   22187 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/functional-296731/id_rsa Username:docker}
I0229 17:52:02.216026   22187 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:52:02.299097   22187 main.go:141] libmachine: Making call to close driver server
I0229 17:52:02.299117   22187 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:52:02.299390   22187 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:52:02.299406   22187 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:52:02.299414   22187 main.go:141] libmachine: Making call to close driver server
I0229 17:52:02.299414   22187 main.go:141] libmachine: (functional-296731) DBG | Closing plugin on server side
I0229 17:52:02.299421   22187 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:52:02.299640   22187 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:52:02.299655   22187 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:52:02.299703   22187 main.go:141] libmachine: (functional-296731) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-296731 image ls --format json --alsologtostderr:
[{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:e6f1
816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:a19bf1586e85907687ef261d3b6807be346214ab23ff2377e9bb0daa31c9d9a9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-296731"],"size":"1007"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-296731"],"size":"10823156"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05
baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-pr
oxy:v1.28.4"],"size":"24581402"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-296731 image ls --format json --alsologtostderr:
I0229 17:52:01.820705   22163 out.go:291] Setting OutFile to fd 1 ...
I0229 17:52:01.820861   22163 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:52:01.820874   22163 out.go:304] Setting ErrFile to fd 2...
I0229 17:52:01.820880   22163 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:52:01.821124   22163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
I0229 17:52:01.821885   22163 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:52:01.822061   22163 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:52:01.822668   22163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:52:01.822729   22163 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:52:01.837393   22163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39419
I0229 17:52:01.837904   22163 main.go:141] libmachine: () Calling .GetVersion
I0229 17:52:01.838489   22163 main.go:141] libmachine: Using API Version  1
I0229 17:52:01.838517   22163 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:52:01.838936   22163 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:52:01.839166   22163 main.go:141] libmachine: (functional-296731) Calling .GetState
I0229 17:52:01.840971   22163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:52:01.841031   22163 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:52:01.855142   22163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
I0229 17:52:01.855590   22163 main.go:141] libmachine: () Calling .GetVersion
I0229 17:52:01.856059   22163 main.go:141] libmachine: Using API Version  1
I0229 17:52:01.856076   22163 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:52:01.856342   22163 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:52:01.856503   22163 main.go:141] libmachine: (functional-296731) Calling .DriverName
I0229 17:52:01.856666   22163 ssh_runner.go:195] Run: systemctl --version
I0229 17:52:01.856686   22163 main.go:141] libmachine: (functional-296731) Calling .GetSSHHostname
I0229 17:52:01.859045   22163 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:52:01.859354   22163 main.go:141] libmachine: (functional-296731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:4c:ce", ip: ""} in network mk-functional-296731: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:57 +0000 UTC Type:0 Mac:52:54:00:ba:4c:ce Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-296731 Clientid:01:52:54:00:ba:4c:ce}
I0229 17:52:01.859388   22163 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined IP address 192.168.39.40 and MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:52:01.859504   22163 main.go:141] libmachine: (functional-296731) Calling .GetSSHPort
I0229 17:52:01.859656   22163 main.go:141] libmachine: (functional-296731) Calling .GetSSHKeyPath
I0229 17:52:01.859792   22163 main.go:141] libmachine: (functional-296731) Calling .GetSSHUsername
I0229 17:52:01.859918   22163 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/functional-296731/id_rsa Username:docker}
I0229 17:52:01.949443   22163 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:52:02.009637   22163 main.go:141] libmachine: Making call to close driver server
I0229 17:52:02.009650   22163 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:52:02.009908   22163 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:52:02.009929   22163 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:52:02.009933   22163 main.go:141] libmachine: (functional-296731) DBG | Closing plugin on server side
I0229 17:52:02.009937   22163 main.go:141] libmachine: Making call to close driver server
I0229 17:52:02.009962   22163 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:52:02.010151   22163 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:52:02.010168   22163 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:52:02.010231   22163 main.go:141] libmachine: (functional-296731) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
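
The stdout above is a single JSON array of image records, each carrying id, repoDigests, repoTags, and size fields. A minimal Go sketch of decoding that listing, assuming only the fields visible in the output above; the struct name and the hard-coded binary path/profile are illustrative, not part of the test suite:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// imageRecord mirrors the fields visible in the `image ls --format json` stdout above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Invocation copied from the test log; adjust the binary path and profile as needed.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-296731",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%s  tags=%v  size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}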

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-296731 image ls --format yaml --alsologtostderr:
- id: sha256:a19bf1586e85907687ef261d3b6807be346214ab23ff2377e9bb0daa31c9d9a9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-296731
size: "1007"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-296731
size: "10823156"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-296731 image ls --format yaml --alsologtostderr:
I0229 17:51:56.980759   22041 out.go:291] Setting OutFile to fd 1 ...
I0229 17:51:56.980902   22041 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:51:56.980913   22041 out.go:304] Setting ErrFile to fd 2...
I0229 17:51:56.980920   22041 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:51:56.981217   22041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
I0229 17:51:56.982044   22041 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:51:56.982211   22041 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:51:56.982819   22041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:51:56.982876   22041 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:51:56.996915   22041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
I0229 17:51:56.997385   22041 main.go:141] libmachine: () Calling .GetVersion
I0229 17:51:56.997955   22041 main.go:141] libmachine: Using API Version  1
I0229 17:51:56.997981   22041 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:51:56.998342   22041 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:51:56.998529   22041 main.go:141] libmachine: (functional-296731) Calling .GetState
I0229 17:51:57.000457   22041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:51:57.000489   22041 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:51:57.014337   22041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
I0229 17:51:57.014766   22041 main.go:141] libmachine: () Calling .GetVersion
I0229 17:51:57.015349   22041 main.go:141] libmachine: Using API Version  1
I0229 17:51:57.015379   22041 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:51:57.015674   22041 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:51:57.015868   22041 main.go:141] libmachine: (functional-296731) Calling .DriverName
I0229 17:51:57.016087   22041 ssh_runner.go:195] Run: systemctl --version
I0229 17:51:57.016109   22041 main.go:141] libmachine: (functional-296731) Calling .GetSSHHostname
I0229 17:51:57.018924   22041 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:51:57.019334   22041 main.go:141] libmachine: (functional-296731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:4c:ce", ip: ""} in network mk-functional-296731: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:57 +0000 UTC Type:0 Mac:52:54:00:ba:4c:ce Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-296731 Clientid:01:52:54:00:ba:4c:ce}
I0229 17:51:57.019367   22041 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined IP address 192.168.39.40 and MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:51:57.019477   22041 main.go:141] libmachine: (functional-296731) Calling .GetSSHPort
I0229 17:51:57.019651   22041 main.go:141] libmachine: (functional-296731) Calling .GetSSHKeyPath
I0229 17:51:57.019790   22041 main.go:141] libmachine: (functional-296731) Calling .GetSSHUsername
I0229 17:51:57.019932   22041 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/functional-296731/id_rsa Username:docker}
I0229 17:51:57.106158   22041 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:51:57.161872   22041 main.go:141] libmachine: Making call to close driver server
I0229 17:51:57.161891   22041 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:51:57.162178   22041 main.go:141] libmachine: (functional-296731) DBG | Closing plugin on server side
I0229 17:51:57.162201   22041 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:51:57.162214   22041 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:51:57.162223   22041 main.go:141] libmachine: Making call to close driver server
I0229 17:51:57.162231   22041 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:51:57.162434   22041 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:51:57.162449   22041 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh pgrep buildkitd: exit status 1 (199.352598ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image build -t localhost/my-image:functional-296731 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 image build -t localhost/my-image:functional-296731 testdata/build --alsologtostderr: (4.956885455s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-296731 image build -t localhost/my-image:functional-296731 testdata/build --alsologtostderr:
I0229 17:51:57.428235   22094 out.go:291] Setting OutFile to fd 1 ...
I0229 17:51:57.428426   22094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:51:57.428440   22094 out.go:304] Setting ErrFile to fd 2...
I0229 17:51:57.428446   22094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:51:57.428684   22094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
I0229 17:51:57.429298   22094 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:51:57.429790   22094 config.go:182] Loaded profile config "functional-296731": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0229 17:51:57.430206   22094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:51:57.430240   22094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:51:57.444790   22094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
I0229 17:51:57.445184   22094 main.go:141] libmachine: () Calling .GetVersion
I0229 17:51:57.445679   22094 main.go:141] libmachine: Using API Version  1
I0229 17:51:57.445700   22094 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:51:57.446054   22094 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:51:57.446288   22094 main.go:141] libmachine: (functional-296731) Calling .GetState
I0229 17:51:57.448019   22094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:51:57.448050   22094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:51:57.462036   22094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
I0229 17:51:57.462466   22094 main.go:141] libmachine: () Calling .GetVersion
I0229 17:51:57.462926   22094 main.go:141] libmachine: Using API Version  1
I0229 17:51:57.462945   22094 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:51:57.463226   22094 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:51:57.463396   22094 main.go:141] libmachine: (functional-296731) Calling .DriverName
I0229 17:51:57.463580   22094 ssh_runner.go:195] Run: systemctl --version
I0229 17:51:57.463605   22094 main.go:141] libmachine: (functional-296731) Calling .GetSSHHostname
I0229 17:51:57.466166   22094 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:51:57.466572   22094 main.go:141] libmachine: (functional-296731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:4c:ce", ip: ""} in network mk-functional-296731: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:57 +0000 UTC Type:0 Mac:52:54:00:ba:4c:ce Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-296731 Clientid:01:52:54:00:ba:4c:ce}
I0229 17:51:57.466607   22094 main.go:141] libmachine: (functional-296731) DBG | domain functional-296731 has defined IP address 192.168.39.40 and MAC address 52:54:00:ba:4c:ce in network mk-functional-296731
I0229 17:51:57.466725   22094 main.go:141] libmachine: (functional-296731) Calling .GetSSHPort
I0229 17:51:57.466871   22094 main.go:141] libmachine: (functional-296731) Calling .GetSSHKeyPath
I0229 17:51:57.467016   22094 main.go:141] libmachine: (functional-296731) Calling .GetSSHUsername
I0229 17:51:57.467162   22094 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/functional-296731/id_rsa Username:docker}
I0229 17:51:57.558106   22094 build_images.go:151] Building image from path: /tmp/build.3586806393.tar
I0229 17:51:57.558171   22094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 17:51:57.576714   22094 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3586806393.tar
I0229 17:51:57.584479   22094 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3586806393.tar: stat -c "%s %y" /var/lib/minikube/build/build.3586806393.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3586806393.tar': No such file or directory
I0229 17:51:57.584518   22094 ssh_runner.go:362] scp /tmp/build.3586806393.tar --> /var/lib/minikube/build/build.3586806393.tar (3072 bytes)
I0229 17:51:57.627187   22094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3586806393
I0229 17:51:57.644565   22094 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3586806393 -xf /var/lib/minikube/build/build.3586806393.tar
I0229 17:51:57.666032   22094 containerd.go:379] Building image: /var/lib/minikube/build/build.3586806393
I0229 17:51:57.666109   22094 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3586806393 --local dockerfile=/var/lib/minikube/build/build.3586806393 --output type=image,name=localhost/my-image:functional-296731
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:77b72946292be461d15e871901c0920f4354c2a301a1fbc21952d8ffc104a5cb 0.0s done
#8 exporting config sha256:bad5496ae9a7dcdb26b02ba2bc34699593b8016f80d27fd0a318229289ff0d48 0.0s done
#8 naming to localhost/my-image:functional-296731 done
#8 DONE 0.2s
I0229 17:52:02.261880   22094 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3586806393 --local dockerfile=/var/lib/minikube/build/build.3586806393 --output type=image,name=localhost/my-image:functional-296731: (4.595739936s)
I0229 17:52:02.261952   22094 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3586806393
I0229 17:52:02.304628   22094 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3586806393.tar
I0229 17:52:02.320402   22094 build_images.go:207] Built localhost/my-image:functional-296731 from /tmp/build.3586806393.tar
I0229 17:52:02.320425   22094 build_images.go:123] succeeded building to: functional-296731
I0229 17:52:02.320429   22094 build_images.go:124] failed building to: 
I0229 17:52:02.320446   22094 main.go:141] libmachine: Making call to close driver server
I0229 17:52:02.320457   22094 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:52:02.320733   22094 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:52:02.320750   22094 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:52:02.320760   22094 main.go:141] libmachine: Making call to close driver server
I0229 17:52:02.320769   22094 main.go:141] libmachine: (functional-296731) Calling .Close
I0229 17:52:02.320967   22094 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:52:02.320979   22094 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.39s)
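
The build above stages the local testdata/build directory as a tar archive, copies it to the node, unpacks it under /var/lib/minikube/build, and hands the directory to buildctl. A minimal sketch of the packaging step using only the Go standard library; the function name and output path are illustrative, not minikube's actual implementation:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// tarBuildContext writes every regular file under srcDir into a tar archive at dst,
// mirroring the "Building image from path: /tmp/build.*.tar" staging step in the log.
func tarBuildContext(srcDir, dst string) error {
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	return filepath.Walk(srcDir, func(path string, info os.FileInfo, err error) error {
		if err != nil || !info.Mode().IsRegular() {
			return err
		}
		rel, err := filepath.Rel(srcDir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	if err := tarBuildContext("testdata/build", "/tmp/build-context.tar"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}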

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.735735194s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-296731
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-296731 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-296731 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-shmjt" [f500f301-e06d-4a35-8554-f628b1b69b6a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-shmjt" [f500f301-e06d-4a35-8554-f628b1b69b6a] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005468081s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)
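
The wait step above keeps polling the cluster until every pod carrying app=hello-node is up. A rough sketch of that polling loop with kubectl, using the context and label from the log; the helper checks only the pod phase, whereas the test also inspects readiness conditions:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls `kubectl get pods` for the given label until every matching pod
// reports Running, or the timeout expires. Illustrative only.
func waitForPods(kubeContext, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			allRunning := true
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not running within %s", label, timeout)
}

func main() {
	if err := waitForPods("functional-296731", "app=hello-node", 10*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}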

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "215.442188ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "54.79015ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "204.006625ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "56.089702ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdany-port2468389708/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709229094784395308" to /tmp/TestFunctionalparallelMountCmdany-port2468389708/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709229094784395308" to /tmp/TestFunctionalparallelMountCmdany-port2468389708/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709229094784395308" to /tmp/TestFunctionalparallelMountCmdany-port2468389708/001/test-1709229094784395308
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.234642ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 17:51 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 17:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 17:51 test-1709229094784395308
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh cat /mount-9p/test-1709229094784395308
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-296731 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [82ba26e8-b9b9-48d4-bca8-283bddf4f9a9] Pending
helpers_test.go:344: "busybox-mount" [82ba26e8-b9b9-48d4-bca8-283bddf4f9a9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [82ba26e8-b9b9-48d4-bca8-283bddf4f9a9] Running
helpers_test.go:344: "busybox-mount" [82ba26e8-b9b9-48d4-bca8-283bddf4f9a9] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [82ba26e8-b9b9-48d4-bca8-283bddf4f9a9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00450566s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-296731 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdany-port2468389708/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.32s)
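
Note that the first findmnt probe above fails because the 9p mount is still coming up, and the test simply retries. A minimal retry sketch around the same check, with the binary path and profile taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Re-run the guest-side check until the 9p mount becomes visible or we give up.
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-296731",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if err := cmd.Run(); err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Fprintln(os.Stderr, "mount never appeared")
	os.Exit(1)
}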

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image load --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 image load --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr: (4.119907608s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image load --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 image load --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr: (2.682802856s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.545511766s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-296731
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image load --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 image load --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr: (5.267101264s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdspecific-port3787827615/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (221.117943ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdspecific-port3787827615/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh "sudo umount -f /mount-9p": exit status 1 (231.963122ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-296731 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdspecific-port3787827615/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 service list -o json
functional_test.go:1490: Took "299.247243ms" to run "out/minikube-linux-amd64 -p functional-296731 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.40:30975
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.40:30975
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
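
With the NodePort URL resolved (http://192.168.39.40:30975 above), the echoserver can be reached directly. A small sketch of such a check; the hard-coded endpoint is just the one reported in this run and would normally come from `service ... --url`:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Endpoint taken from this run's log; substitute the URL returned by `minikube service --url`.
	resp, err := http.Get("http://192.168.39.40:30975")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}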

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107014322/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107014322/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107014322/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T" /mount1: exit status 1 (342.573685ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-296731 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107014322/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107014322/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-296731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107014322/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image save gcr.io/google-containers/addon-resizer:functional-296731 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 image save gcr.io/google-containers/addon-resizer:functional-296731 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.072978629s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image rm gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.300328869s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-296731
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-296731 image save --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-296731 image save --daemon gcr.io/google-containers/addon-resizer:functional-296731 --alsologtostderr: (1.142462963s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-296731
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.18s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-296731
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-296731
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-296731
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestJSONOutput/start/Command (60.69s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-711649 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0229 18:01:33.750714   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 18:02:01.434674   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-711649 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m0.693282542s)
--- PASS: TestJSONOutput/start/Command (60.69s)
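
With --output=json, minikube start emits one JSON event per line on stdout. A generic sketch that streams and decodes those lines without assuming any particular event schema; the flags mirror the test invocation above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Start the cluster with JSON output and decode each stdout line as a generic object.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "json-output-711649",
		"--output=json", "--user=testUser", "--memory=2200", "--wait=true",
		"--driver=kvm2", "--container-runtime=containerd")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			continue // ignore any line that is not a JSON object
		}
		fmt.Println(event)
	}
	_ = cmd.Wait()
}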

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-711649 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-711649 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-711649 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-711649 --output=json --user=testUser: (7.098199913s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-328404 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-328404 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.491041ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3af8a851-f856-454c-a3b0-eb2aa705cf9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-328404] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c18bf509-6291-47b3-9a8e-110f64a7068a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"c253451d-b01a-4480-9b69-0431e84313bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d6c4cb8-a320-4487-b758-98e79ec2ceb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig"}}
	{"specversion":"1.0","id":"0a321a17-4746-490b-b9ab-23f314395fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube"}}
	{"specversion":"1.0","id":"4f5fb481-f8d7-40ea-bd8b-f89e4522d183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7edfac03-d44b-4918-9a16-585c17b25cbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"688adccc-eaa8-4603-9b6e-14c159278633","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-328404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-328404
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (92.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-204622 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-204622 --driver=kvm2  --container-runtime=containerd: (45.367458217s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-207526 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-207526 --driver=kvm2  --container-runtime=containerd: (44.457970477s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-204622
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-207526
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-207526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-207526
helpers_test.go:175: Cleaning up "first-204622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-204622
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-204622: (1.020857352s)
--- PASS: TestMinikubeProfile (92.46s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (32.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-701798 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-701798 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.702977192s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.70s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-701798 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-701798 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.75s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-716827 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0229 18:04:42.039288   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-716827 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.745868543s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.75s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-716827 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-716827 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-701798 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-716827 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-716827 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-716827
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-716827: (1.177911411s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (27.32s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-716827
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-716827: (26.322499185s)
--- PASS: TestMountStart/serial/RestartStopped (27.32s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-716827 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-716827 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (188.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-583430 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0229 18:06:05.086708   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 18:06:33.751202   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-583430 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m8.164563064s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (188.58s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-583430 -- rollout status deployment/busybox: (4.347397363s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-vm9n2 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-zdmfl -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-vm9n2 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-zdmfl -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-vm9n2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-zdmfl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.24s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-vm9n2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-vm9n2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-zdmfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-583430 -- exec busybox-5b5d89c9d6-zdmfl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
TestMultiNode/serial/AddNode (45.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-583430 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-583430 -v 3 --alsologtostderr: (44.681782538s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.26s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-583430 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp testdata/cp-test.txt multinode-583430:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283626129/001/cp-test_multinode-583430.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430:/home/docker/cp-test.txt multinode-583430-m02:/home/docker/cp-test_multinode-583430_multinode-583430-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m02 "sudo cat /home/docker/cp-test_multinode-583430_multinode-583430-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430:/home/docker/cp-test.txt multinode-583430-m03:/home/docker/cp-test_multinode-583430_multinode-583430-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m03 "sudo cat /home/docker/cp-test_multinode-583430_multinode-583430-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp testdata/cp-test.txt multinode-583430-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283626129/001/cp-test_multinode-583430-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430-m02:/home/docker/cp-test.txt multinode-583430:/home/docker/cp-test_multinode-583430-m02_multinode-583430.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430 "sudo cat /home/docker/cp-test_multinode-583430-m02_multinode-583430.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430-m02:/home/docker/cp-test.txt multinode-583430-m03:/home/docker/cp-test_multinode-583430-m02_multinode-583430-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m03 "sudo cat /home/docker/cp-test_multinode-583430-m02_multinode-583430-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp testdata/cp-test.txt multinode-583430-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4283626129/001/cp-test_multinode-583430-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430-m03:/home/docker/cp-test.txt multinode-583430:/home/docker/cp-test_multinode-583430-m03_multinode-583430.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430 "sudo cat /home/docker/cp-test_multinode-583430-m03_multinode-583430.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 cp multinode-583430-m03:/home/docker/cp-test.txt multinode-583430-m02:/home/docker/cp-test_multinode-583430-m03_multinode-583430-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 ssh -n multinode-583430-m02 "sudo cat /home/docker/cp-test_multinode-583430-m03_multinode-583430-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.60s)

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-583430 node stop m03: (1.253550745s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-583430 status: exit status 7 (445.451454ms)

                                                
                                                
-- stdout --
	multinode-583430
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-583430-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-583430-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr: exit status 7 (435.655777ms)

                                                
                                                
-- stdout --
	multinode-583430
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-583430-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-583430-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:09:28.045641   29519 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:09:28.045758   29519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:09:28.045767   29519 out.go:304] Setting ErrFile to fd 2...
	I0229 18:09:28.045771   29519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:09:28.045985   29519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:09:28.046134   29519 out.go:298] Setting JSON to false
	I0229 18:09:28.046165   29519 mustload.go:65] Loading cluster: multinode-583430
	I0229 18:09:28.046268   29519 notify.go:220] Checking for updates...
	I0229 18:09:28.046877   29519 config.go:182] Loaded profile config "multinode-583430": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:09:28.046940   29519 status.go:255] checking status of multinode-583430 ...
	I0229 18:09:28.047980   29519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:09:28.048245   29519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:28.063003   29519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0229 18:09:28.063385   29519 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:28.063930   29519 main.go:141] libmachine: Using API Version  1
	I0229 18:09:28.063955   29519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:28.064301   29519 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:28.064522   29519 main.go:141] libmachine: (multinode-583430) Calling .GetState
	I0229 18:09:28.066267   29519 status.go:330] multinode-583430 host status = "Running" (err=<nil>)
	I0229 18:09:28.066284   29519 host.go:66] Checking if "multinode-583430" exists ...
	I0229 18:09:28.066653   29519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:09:28.066714   29519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:28.082196   29519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I0229 18:09:28.082537   29519 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:28.083035   29519 main.go:141] libmachine: Using API Version  1
	I0229 18:09:28.083062   29519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:28.083379   29519 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:28.083572   29519 main.go:141] libmachine: (multinode-583430) Calling .GetIP
	I0229 18:09:28.085908   29519 main.go:141] libmachine: (multinode-583430) DBG | domain multinode-583430 has defined MAC address 52:54:00:59:39:09 in network mk-multinode-583430
	I0229 18:09:28.086191   29519 main.go:141] libmachine: (multinode-583430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:39:09", ip: ""} in network mk-multinode-583430: {Iface:virbr1 ExpiryTime:2024-02-29 19:05:32 +0000 UTC Type:0 Mac:52:54:00:59:39:09 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-583430 Clientid:01:52:54:00:59:39:09}
	I0229 18:09:28.086216   29519 main.go:141] libmachine: (multinode-583430) DBG | domain multinode-583430 has defined IP address 192.168.39.123 and MAC address 52:54:00:59:39:09 in network mk-multinode-583430
	I0229 18:09:28.086328   29519 host.go:66] Checking if "multinode-583430" exists ...
	I0229 18:09:28.086627   29519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:09:28.086664   29519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:28.100971   29519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0229 18:09:28.101315   29519 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:28.101739   29519 main.go:141] libmachine: Using API Version  1
	I0229 18:09:28.101766   29519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:28.102123   29519 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:28.102329   29519 main.go:141] libmachine: (multinode-583430) Calling .DriverName
	I0229 18:09:28.102506   29519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:09:28.102529   29519 main.go:141] libmachine: (multinode-583430) Calling .GetSSHHostname
	I0229 18:09:28.104840   29519 main.go:141] libmachine: (multinode-583430) DBG | domain multinode-583430 has defined MAC address 52:54:00:59:39:09 in network mk-multinode-583430
	I0229 18:09:28.105189   29519 main.go:141] libmachine: (multinode-583430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:39:09", ip: ""} in network mk-multinode-583430: {Iface:virbr1 ExpiryTime:2024-02-29 19:05:32 +0000 UTC Type:0 Mac:52:54:00:59:39:09 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-583430 Clientid:01:52:54:00:59:39:09}
	I0229 18:09:28.105223   29519 main.go:141] libmachine: (multinode-583430) DBG | domain multinode-583430 has defined IP address 192.168.39.123 and MAC address 52:54:00:59:39:09 in network mk-multinode-583430
	I0229 18:09:28.105367   29519 main.go:141] libmachine: (multinode-583430) Calling .GetSSHPort
	I0229 18:09:28.105518   29519 main.go:141] libmachine: (multinode-583430) Calling .GetSSHKeyPath
	I0229 18:09:28.105678   29519 main.go:141] libmachine: (multinode-583430) Calling .GetSSHUsername
	I0229 18:09:28.105818   29519 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/multinode-583430/id_rsa Username:docker}
	I0229 18:09:28.195058   29519 ssh_runner.go:195] Run: systemctl --version
	I0229 18:09:28.202162   29519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:09:28.218059   29519 kubeconfig.go:92] found "multinode-583430" server: "https://192.168.39.123:8443"
	I0229 18:09:28.218091   29519 api_server.go:166] Checking apiserver status ...
	I0229 18:09:28.218132   29519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:09:28.233173   29519 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup
	W0229 18:09:28.243160   29519 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:09:28.243212   29519 ssh_runner.go:195] Run: ls
	I0229 18:09:28.248188   29519 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I0229 18:09:28.252712   29519 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I0229 18:09:28.252736   29519 status.go:421] multinode-583430 apiserver status = Running (err=<nil>)
	I0229 18:09:28.252748   29519 status.go:257] multinode-583430 status: &{Name:multinode-583430 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:09:28.252766   29519 status.go:255] checking status of multinode-583430-m02 ...
	I0229 18:09:28.253087   29519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:09:28.253123   29519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:28.267691   29519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I0229 18:09:28.268056   29519 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:28.268482   29519 main.go:141] libmachine: Using API Version  1
	I0229 18:09:28.268502   29519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:28.268797   29519 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:28.268993   29519 main.go:141] libmachine: (multinode-583430-m02) Calling .GetState
	I0229 18:09:28.270485   29519 status.go:330] multinode-583430-m02 host status = "Running" (err=<nil>)
	I0229 18:09:28.270505   29519 host.go:66] Checking if "multinode-583430-m02" exists ...
	I0229 18:09:28.270848   29519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:09:28.270887   29519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:28.285571   29519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45731
	I0229 18:09:28.285939   29519 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:28.286359   29519 main.go:141] libmachine: Using API Version  1
	I0229 18:09:28.286377   29519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:28.286724   29519 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:28.286890   29519 main.go:141] libmachine: (multinode-583430-m02) Calling .GetIP
	I0229 18:09:28.289405   29519 main.go:141] libmachine: (multinode-583430-m02) DBG | domain multinode-583430-m02 has defined MAC address 52:54:00:51:e8:b8 in network mk-multinode-583430
	I0229 18:09:28.289848   29519 main.go:141] libmachine: (multinode-583430-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:e8:b8", ip: ""} in network mk-multinode-583430: {Iface:virbr1 ExpiryTime:2024-02-29 19:06:37 +0000 UTC Type:0 Mac:52:54:00:51:e8:b8 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-583430-m02 Clientid:01:52:54:00:51:e8:b8}
	I0229 18:09:28.289871   29519 main.go:141] libmachine: (multinode-583430-m02) DBG | domain multinode-583430-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:51:e8:b8 in network mk-multinode-583430
	I0229 18:09:28.289970   29519 host.go:66] Checking if "multinode-583430-m02" exists ...
	I0229 18:09:28.290243   29519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:09:28.290281   29519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:28.305232   29519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0229 18:09:28.305690   29519 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:28.306138   29519 main.go:141] libmachine: Using API Version  1
	I0229 18:09:28.306159   29519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:28.306497   29519 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:28.306692   29519 main.go:141] libmachine: (multinode-583430-m02) Calling .DriverName
	I0229 18:09:28.306867   29519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:09:28.306900   29519 main.go:141] libmachine: (multinode-583430-m02) Calling .GetSSHHostname
	I0229 18:09:28.309559   29519 main.go:141] libmachine: (multinode-583430-m02) DBG | domain multinode-583430-m02 has defined MAC address 52:54:00:51:e8:b8 in network mk-multinode-583430
	I0229 18:09:28.309959   29519 main.go:141] libmachine: (multinode-583430-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:e8:b8", ip: ""} in network mk-multinode-583430: {Iface:virbr1 ExpiryTime:2024-02-29 19:06:37 +0000 UTC Type:0 Mac:52:54:00:51:e8:b8 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-583430-m02 Clientid:01:52:54:00:51:e8:b8}
	I0229 18:09:28.309986   29519 main.go:141] libmachine: (multinode-583430-m02) DBG | domain multinode-583430-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:51:e8:b8 in network mk-multinode-583430
	I0229 18:09:28.310153   29519 main.go:141] libmachine: (multinode-583430-m02) Calling .GetSSHPort
	I0229 18:09:28.310338   29519 main.go:141] libmachine: (multinode-583430-m02) Calling .GetSSHKeyPath
	I0229 18:09:28.310469   29519 main.go:141] libmachine: (multinode-583430-m02) Calling .GetSSHUsername
	I0229 18:09:28.311730   29519 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/multinode-583430-m02/id_rsa Username:docker}
	I0229 18:09:28.398503   29519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:09:28.413770   29519 status.go:257] multinode-583430-m02 status: &{Name:multinode-583430-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:09:28.413803   29519 status.go:255] checking status of multinode-583430-m03 ...
	I0229 18:09:28.414109   29519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:09:28.414154   29519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:28.428816   29519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I0229 18:09:28.429182   29519 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:28.429636   29519 main.go:141] libmachine: Using API Version  1
	I0229 18:09:28.429665   29519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:28.429969   29519 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:28.430141   29519 main.go:141] libmachine: (multinode-583430-m03) Calling .GetState
	I0229 18:09:28.431521   29519 status.go:330] multinode-583430-m03 host status = "Stopped" (err=<nil>)
	I0229 18:09:28.431535   29519 status.go:343] host is not running, skipping remaining checks
	I0229 18:09:28.431542   29519 status.go:257] multinode-583430-m03 status: &{Name:multinode-583430-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (23.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 node start m03 --alsologtostderr
E0229 18:09:42.039114   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-583430 node start m03 --alsologtostderr: (23.129761544s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (23.77s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (310.58s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-583430
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-583430
E0229 18:11:33.750673   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 18:12:56.795774   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-583430: (3m4.682723197s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-583430 --wait=true -v=8 --alsologtostderr
E0229 18:14:42.039268   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-583430 --wait=true -v=8 --alsologtostderr: (2m5.788912843s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-583430
--- PASS: TestMultiNode/serial/RestartKeepsNodes (310.58s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-583430 node delete m03: (1.163548983s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.70s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.59s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 stop
E0229 18:16:33.751516   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-583430 stop: (3m3.417174999s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-583430 status: exit status 7 (87.123464ms)

                                                
                                                
-- stdout --
	multinode-583430
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-583430-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr: exit status 7 (90.201316ms)

                                                
                                                
-- stdout --
	multinode-583430
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-583430-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:18:08.034207   32112 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:18:08.034323   32112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:18:08.034334   32112 out.go:304] Setting ErrFile to fd 2...
	I0229 18:18:08.034340   32112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:18:08.034578   32112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:18:08.034757   32112 out.go:298] Setting JSON to false
	I0229 18:18:08.034790   32112 mustload.go:65] Loading cluster: multinode-583430
	I0229 18:18:08.034892   32112 notify.go:220] Checking for updates...
	I0229 18:18:08.035206   32112 config.go:182] Loaded profile config "multinode-583430": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:18:08.035223   32112 status.go:255] checking status of multinode-583430 ...
	I0229 18:18:08.035609   32112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:18:08.035681   32112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:18:08.054249   32112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
	I0229 18:18:08.054612   32112 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:18:08.055123   32112 main.go:141] libmachine: Using API Version  1
	I0229 18:18:08.055139   32112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:18:08.055460   32112 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:18:08.055654   32112 main.go:141] libmachine: (multinode-583430) Calling .GetState
	I0229 18:18:08.057095   32112 status.go:330] multinode-583430 host status = "Stopped" (err=<nil>)
	I0229 18:18:08.057104   32112 status.go:343] host is not running, skipping remaining checks
	I0229 18:18:08.057109   32112 status.go:257] multinode-583430 status: &{Name:multinode-583430 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:18:08.057131   32112 status.go:255] checking status of multinode-583430-m02 ...
	I0229 18:18:08.057382   32112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0229 18:18:08.057415   32112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:18:08.071069   32112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0229 18:18:08.071435   32112 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:18:08.071806   32112 main.go:141] libmachine: Using API Version  1
	I0229 18:18:08.071841   32112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:18:08.072133   32112 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:18:08.072283   32112 main.go:141] libmachine: (multinode-583430-m02) Calling .GetState
	I0229 18:18:08.073574   32112 status.go:330] multinode-583430-m02 host status = "Stopped" (err=<nil>)
	I0229 18:18:08.073589   32112 status.go:343] host is not running, skipping remaining checks
	I0229 18:18:08.073597   32112 status.go:257] multinode-583430-m02 status: &{Name:multinode-583430-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.59s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-583430 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-583430 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m26.544842843s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-583430 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.08s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.58s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-583430
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-583430-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-583430-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.195218ms)

                                                
                                                
-- stdout --
	* [multinode-583430-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-583430-m02' is duplicated with machine name 'multinode-583430-m02' in profile 'multinode-583430'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-583430-m03 --driver=kvm2  --container-runtime=containerd
E0229 18:19:42.039157   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-583430-m03 --driver=kvm2  --container-runtime=containerd: (46.267047636s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-583430
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-583430: exit status 80 (227.294002ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-583430
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-583430-m03 already exists in multinode-583430-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-583430-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.58s)
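
The conflict exercised here is minikube's rule that a new profile name may not collide with a machine name already owned by another profile: "multinode-583430-m02" is the second node of the "multinode-583430" cluster, so it is rejected, while the unused "-m03" name starts as a separate single-node profile. A hedged way to reproduce the same behaviour by hand (profile names are the ones from this run and purely illustrative):

    minikube node list -p multinode-583430
    minikube start -p multinode-583430-m02 --driver=kvm2 --container-runtime=containerd   # exit 14, MK_USAGE: profile name should be unique
    minikube start -p multinode-583430-m03 --driver=kvm2 --container-runtime=containerd   # accepted as its own profile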

                                                
                                    
TestPreload (258.6s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-543870 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0229 18:21:33.750669   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 18:22:45.087308   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-543870 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m50.250719951s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-543870 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-543870 image pull gcr.io/k8s-minikube/busybox: (3.103613924s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-543870
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-543870: (7.095898311s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-543870 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-543870 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m16.876131383s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-543870 image list
helpers_test.go:175: Cleaning up "test-preload-543870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-543870
E0229 18:24:42.039080   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-543870: (1.05138486s)
--- PASS: TestPreload (258.60s)
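
TestPreload checks that an image pulled into a cluster created with --preload=false is still present after the cluster is stopped and started again. A hedged condensation of the sequence exercised above (profile name taken from this run, purely illustrative):

    minikube start -p test-preload-543870 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-543870 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-543870
    minikube start -p test-preload-543870 --memory=2200 --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-543870 image list      # busybox should still be listed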

                                                
                                    
TestScheduledStopUnix (118.3s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-217121 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-217121 --memory=2048 --driver=kvm2  --container-runtime=containerd: (46.60224131s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-217121 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-217121 -n scheduled-stop-217121
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-217121 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-217121 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-217121 -n scheduled-stop-217121
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-217121
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-217121 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0229 18:26:33.751281   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-217121
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-217121: exit status 7 (71.767606ms)

                                                
                                                
-- stdout --
	scheduled-stop-217121
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-217121 -n scheduled-stop-217121
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-217121 -n scheduled-stop-217121: exit status 7 (74.317271ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-217121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-217121
--- PASS: TestScheduledStopUnix (118.30s)
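
The scheduled-stop flow shown above arms a background stop with --schedule, can be disarmed with --cancel-scheduled, and is observed through the status template fields; once the scheduled stop fires, status exits 7 and reports Stopped. A hedged replay of the same sequence using only flags that appear in this run:

    minikube stop -p scheduled-stop-217121 --schedule 5m
    minikube status -p scheduled-stop-217121 --format={{.TimeToStop}}   # remaining time while armed
    minikube stop -p scheduled-stop-217121 --cancel-scheduled
    minikube stop -p scheduled-stop-217121 --schedule 15s
    minikube status -p scheduled-stop-217121                            # exit status 7 / "Stopped" after the stop fires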

                                                
                                    
TestRunningBinaryUpgrade (224.29s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1810275761 start -p running-upgrade-413853 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1810275761 start -p running-upgrade-413853 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m7.539826357s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-413853 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-413853 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m31.982427502s)
helpers_test.go:175: Cleaning up "running-upgrade-413853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-413853
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-413853: (1.162866727s)
--- PASS: TestRunningBinaryUpgrade (224.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388162 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-388162 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (89.08518ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-388162] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
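
The usage error above documents that --no-kubernetes and --kubernetes-version are mutually exclusive, and the stderr hint shows the escape hatch when a version has been pinned in the global config. A hedged sketch of the fix path (profile name from this run, illustrative only):

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-388162 --no-kubernetes --driver=kvm2 --container-runtime=containerd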

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388162 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388162 --driver=kvm2  --container-runtime=containerd: (1m38.23252201s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-388162 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.48s)

                                                
                                    
TestPause/serial/Start (122.13s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-027171 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-027171 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m2.132965221s)
--- PASS: TestPause/serial/Start (122.13s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (79.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388162 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388162 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m18.235305965s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-388162 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-388162 status -o json: exit status 2 (290.303238ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-388162","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-388162
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-388162: (1.284016371s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (79.81s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.33s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-027171 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0229 18:29:36.796667   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-027171 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (8.313309716s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.33s)

                                                
                                    
TestPause/serial/Pause (1.59s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-027171 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-027171 --alsologtostderr -v=5: (1.587941554s)
--- PASS: TestPause/serial/Pause (1.59s)

                                                
                                    
TestNoKubernetes/serial/Start (28.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388162 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388162 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.278214084s)
--- PASS: TestNoKubernetes/serial/Start (28.28s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-027171 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-027171 --output=json --layout=cluster: exit status 2 (288.933056ms)

                                                
                                                
-- stdout --
	{"Name":"pause-027171","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-027171","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
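
The --layout=cluster status above reports component states as HTTP-like codes (200 OK, 405 Stopped, 418 Paused), and the command intentionally exits 2 while the cluster is paused. If jq is available on the host (an assumption, jq is not part of the test), the same JSON can be reduced to a single field, for example:

    minikube status -p pause-027171 --output=json --layout=cluster | jq -r '.StatusName'
    # prints "Paused" for the state captured above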

                                                
                                    
TestPause/serial/Unpause (0.92s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-027171 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

                                                
                                    
TestPause/serial/PauseAgain (1.03s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-027171 --alsologtostderr -v=5
E0229 18:29:42.039124   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-027171 --alsologtostderr -v=5: (1.032610771s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

                                                
                                    
TestPause/serial/DeletePaused (0.87s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-027171 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.87s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.55s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.550204322s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-388162 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-388162 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.217613ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
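
The non-zero exit is the expected result here: systemctl is-active exits 0 only when the unit is active, and an inactive unit conventionally yields exit status 3, which the ssh wrapper surfaces as "Process exited with status 3". A hedged manual check against the same profile (this variant drops --quiet so the state is printed):

    minikube ssh -p NoKubernetes-388162 "sudo systemctl is-active kubelet"
    # expected to print "inactive" and exit 3 when the profile was started with --no-kubernetes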

                                                
                                    
TestNoKubernetes/serial/ProfileList (14.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.102756228s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.73s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-388162
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-388162: (1.337348677s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (28.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388162 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388162 --driver=kvm2  --container-runtime=containerd: (28.206762217s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (28.21s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-388162 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-388162 "sudo systemctl is-active --quiet service kubelet": exit status 1 (386.889847ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestNetworkPlugins/group/false (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-387000 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-387000 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (114.944261ms)

                                                
                                                
-- stdout --
	* [false-387000] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:30:57.358365   39606 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:30:57.358650   39606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:30:57.358660   39606 out.go:304] Setting ErrFile to fd 2...
	I0229 18:30:57.358666   39606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:30:57.358894   39606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
	I0229 18:30:57.359459   39606 out.go:298] Setting JSON to false
	I0229 18:30:57.360403   39606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4399,"bootTime":1709227059,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:30:57.360460   39606 start.go:139] virtualization: kvm guest
	I0229 18:30:57.362572   39606 out.go:177] * [false-387000] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:30:57.364168   39606 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:30:57.365282   39606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:30:57.364220   39606 notify.go:220] Checking for updates...
	I0229 18:30:57.367546   39606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
	I0229 18:30:57.368706   39606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
	I0229 18:30:57.369733   39606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:30:57.370726   39606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:30:57.372248   39606 config.go:182] Loaded profile config "cert-expiration-829233": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:30:57.372374   39606 config.go:182] Loaded profile config "cert-options-153536": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:30:57.372482   39606 config.go:182] Loaded profile config "kubernetes-upgrade-907979": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0229 18:30:57.372590   39606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:30:57.407754   39606 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:30:57.408915   39606 start.go:299] selected driver: kvm2
	I0229 18:30:57.408929   39606 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:30:57.408940   39606 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:30:57.410755   39606 out.go:177] 
	W0229 18:30:57.411882   39606 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0229 18:30:57.413030   39606 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-387000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-387000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:29:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.208:8443
  name: cert-expiration-829233
contexts:
- context:
    cluster: cert-expiration-829233
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:29:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-829233
  name: cert-expiration-829233
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-829233
  user:
    client-certificate: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/cert-expiration-829233/client.crt
    client-key: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/cert-expiration-829233/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-387000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387000"

                                                
                                                
----------------------- debugLogs end: false-387000 [took: 3.123105615s] --------------------------------
helpers_test.go:175: Cleaning up "false-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-387000
--- PASS: TestNetworkPlugins/group/false (3.39s)
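
The MK_USAGE failure is the point of this test: with --container-runtime=containerd, pod networking needs a CNI plugin, so --cni=false is rejected before any VM is created. A hedged example of a start line that passes this validation; the profile name and the choice of the bridge CNI are illustrative, not part of the test:

    minikube start -p containerd-cni-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=containerd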

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (198.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.105204447 start -p stopped-upgrade-475131 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0229 18:31:33.750738   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.105204447 start -p stopped-upgrade-475131 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m21.256847071s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.105204447 -p stopped-upgrade-475131 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.105204447 -p stopped-upgrade-475131 stop: (2.369716822s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-475131 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-475131 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m54.602020375s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (198.23s)
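
Both binary-upgrade tests follow the same three-step pattern: bring the cluster up with a previously released binary, stop it (for the stopped-upgrade variant), then start the same profile with the binary under test and read back its logs. Condensed from the commands above; the /tmp path is just the temporary copy of the v1.26.0 release the test downloaded, so any older minikube binary would play the same role:

    /tmp/minikube-v1.26.0.105204447 start -p stopped-upgrade-475131 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    /tmp/minikube-v1.26.0.105204447 -p stopped-upgrade-475131 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-475131 --memory=2200 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 logs -p stopped-upgrade-475131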

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (211.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-644659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-644659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (3m31.537928604s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (211.54s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-475131
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-475131: (1.233608936s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (61.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-596503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0229 18:34:42.038972   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-596503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m1.180609028s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-596503 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [be68356e-c3ab-4b14-94d5-c5912d637f99] Pending
helpers_test.go:344: "busybox" [be68356e-c3ab-4b14-94d5-c5912d637f99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [be68356e-c3ab-4b14-94d5-c5912d637f99] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00382544s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-596503 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.34s)
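
The busybox rollout above is driven by a test helper that polls for pods labelled integration-test=busybox until they are Running and Ready. Outside the harness, roughly the same wait can be expressed with kubectl directly; this is a hedged equivalent, not the helper's actual implementation:

    kubectl --context embed-certs-596503 create -f testdata/busybox.yaml
    kubectl --context embed-certs-596503 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-596503 exec busybox -- /bin/sh -c "ulimit -n"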

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-596503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-596503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.108334619s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-596503 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-596503 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-596503 --alsologtostderr -v=3: (1m32.256003363s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-644659 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6a8367ca-b33f-4e47-8a76-88e7bbf2ec08] Pending
helpers_test.go:344: "busybox" [6a8367ca-b33f-4e47-8a76-88e7bbf2ec08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6a8367ca-b33f-4e47-8a76-88e7bbf2ec08] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005557029s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-644659 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-644659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-644659 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (92.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-644659 --alsologtostderr -v=3
E0229 18:36:33.751298   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-644659 --alsologtostderr -v=3: (1m32.251555257s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.25s)
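
The Stop step is a plain minikube stop against the profile (roughly 1m32s on this runner). A sketch of the stop plus a follow-up status probe; the exit-code-7 behaviour is the one logged by the EnableAddonAfterStop steps elsewhere in this run, and the echo of the exit code is only for illustration.

  out/minikube-linux-amd64 stop -p no-preload-644659 --alsologtostderr -v=3
  # After a stop, `status` prints "Stopped" and exits 7; the tests treat 7 as "stopped", not as a failure.
  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-644659 -n no-preload-644659 || echo "status exit code: $?"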

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-459722 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-459722 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m38.803806863s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.80s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596503 -n embed-certs-596503
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596503 -n embed-certs-596503: exit status 7 (85.900924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-596503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
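
This step verifies that an addon can be enabled while the cluster is stopped: status exits 7 ("Stopped"), which the test logs as "may be ok", and the dashboard addon is then enabled with its MetricsScraper image overridden. A sketch with the exit-code handling made explicit; the rc variable and if-check are illustration, the commands are the ones in the log.

  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596503 -n embed-certs-596503
  rc=$?
  if [ "$rc" -eq 7 ]; then
    # Exit 7 means the host is stopped; enabling an addon does not require a running cluster.
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-596503 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  fi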

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (331.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-596503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-596503 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m31.138415973s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596503 -n embed-certs-596503
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (331.43s)
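
SecondStart restarts the previously stopped profile with the same flags as the first start and then re-checks the host. A sketch; the flags are copied from the log, and after a healthy restart the final status probe is expected to exit 0 (rather than the exit-7 seen while stopped).

  # Restart the stopped profile with the same configuration (about 5m31s in this run).
  out/minikube-linux-amd64 start -p embed-certs-596503 --memory=2200 --alsologtostderr \
    --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd \
    --kubernetes-version=v1.28.4
  # Verify the host came back; this is the same check the test runs at the end.
  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-596503 -n embed-certs-596503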

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-561577 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-561577 --alsologtostderr -v=3: (1.360453999s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561577 -n old-k8s-version-561577: exit status 7 (75.218237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-561577 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-644659 -n no-preload-644659
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-644659 -n no-preload-644659: exit status 7 (91.681522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-644659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (348.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-644659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-644659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m47.923204728s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-644659 -n no-preload-644659
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (348.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-459722 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85f2e577-00bb-4762-8ae1-e6628f875a82] Pending
helpers_test.go:344: "busybox" [85f2e577-00bb-4762-8ae1-e6628f875a82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85f2e577-00bb-4762-8ae1-e6628f875a82] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004932384s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-459722 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-459722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-459722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.102285952s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-459722 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (92.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-459722 --alsologtostderr -v=3
E0229 18:39:25.087746   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 18:39:42.039970   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-459722 --alsologtostderr -v=3: (1m32.262256609s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722: exit status 7 (75.72184ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-459722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (331.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-459722 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0229 18:41:33.751229   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-459722 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m31.661072817s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (331.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bksdn" [5c1b64dd-afb3-4879-a37c-63ad1e67264f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bksdn" [5c1b64dd-afb3-4879-a37c-63ad1e67264f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.010869136s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)
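
UserAppExistsAfterStop and the AddonExistsAfterStop step that follows both wait for the kubernetes-dashboard pods (enabled while the cluster was stopped) to become healthy after the restart. A rough kubectl equivalent of that wait; kubectl wait with a 540s timeout stands in for the test helper's 9m polling loop, and the describe command is the one the next step runs.

  # Wait for the dashboard pod created by the addon to become Ready after the restart.
  kubectl --context embed-certs-596503 -n kubernetes-dashboard \
    wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s
  # AddonExistsAfterStop additionally checks the scraper Deployment:
  kubectl --context embed-certs-596503 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard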

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bksdn" [5c1b64dd-afb3-4879-a37c-63ad1e67264f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006048502s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-596503 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-596503 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
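
VerifyKubernetesImages lists the images loaded in the profile and reports any that are not standard minikube/Kubernetes images (here busybox and kindnetd). A sketch of the same listing; the grep is only a rough stand-in for the test's JSON parsing, since the JSON schema is not shown in this report.

  # List every image present in the profile as JSON (the test parses this output).
  out/minikube-linux-amd64 -p embed-certs-596503 image list --format=json
  # Rough check for the two non-minikube images reported above.
  out/minikube-linux-amd64 -p embed-certs-596503 image list --format=json | grep -E 'busybox|kindnetd'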

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-596503 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596503 -n embed-certs-596503
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596503 -n embed-certs-596503: exit status 2 (251.668442ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596503 -n embed-certs-596503
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596503 -n embed-certs-596503: exit status 2 (248.188512ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-596503 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596503 -n embed-certs-596503
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596503 -n embed-certs-596503
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.71s)
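
The Pause step pauses the control plane, checks that status reports the API server as Paused and the kubelet as Stopped (both probes exit 2, which the test tolerates), then unpauses and re-checks. A condensed sketch of that sequence; the `|| true` is only there so a manual run continues past the expected non-zero exits.

  out/minikube-linux-amd64 pause -p embed-certs-596503 --alsologtostderr -v=1
  # While paused, both probes exit 2 and print "Paused" / "Stopped" respectively.
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596503 -n embed-certs-596503 || true
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596503 -n embed-certs-596503 || true
  out/minikube-linux-amd64 unpause -p embed-certs-596503 --alsologtostderr -v=1
  # After unpausing, the same two probes are run again and are expected to succeed.
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-596503 -n embed-certs-596503
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-596503 -n embed-certs-596503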

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-462109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-462109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (59.139698424s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.14s)
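
The newest-cni profile is started with only a partial readiness wait (apiserver, system pods, default service account), a feature gate, and CNI networking whose pod CIDR is passed to kubeadm via --extra-config. A sketch of that invocation, copied from the log; this is also why the later DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop steps are skipped with the "cni mode requires additional setup" warning.

  # Start a CNI-mode profile; pods cannot schedule until a CNI is actually deployed.
  out/minikube-linux-amd64 start -p newest-cni-462109 --memory=2200 --alsologtostderr \
    --wait=apiserver,system_pods,default_sa \
    --feature-gates ServerSideApply=true \
    --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2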

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6m29c" [f1abd264-ccc9-4235-b8f9-afa2f34f4183] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6m29c" [f1abd264-ccc9-4235-b8f9-afa2f34f4183] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.004724693s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6m29c" [f1abd264-ccc9-4235-b8f9-afa2f34f4183] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00469056s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-644659 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-462109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-462109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.297618686s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-644659 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-644659 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-644659 -n no-preload-644659
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-644659 -n no-preload-644659: exit status 2 (268.358708ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-644659 -n no-preload-644659
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-644659 -n no-preload-644659: exit status 2 (265.606731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-644659 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-644659 -n no-preload-644659
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-644659 -n no-preload-644659
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-462109 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-462109 --alsologtostderr -v=3: (2.101363166s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-462109 -n newest-cni-462109
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-462109 -n newest-cni-462109: exit status 7 (83.885406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-462109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (44.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-462109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-462109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (44.61108458s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-462109 -n newest-cni-462109
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (123.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0229 18:44:42.039306   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m3.718450578s)
--- PASS: TestNetworkPlugins/group/auto/Start (123.72s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-462109 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-462109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-462109 -n newest-cni-462109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-462109 -n newest-cni-462109: exit status 2 (253.751286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-462109 -n newest-cni-462109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-462109 -n newest-cni-462109: exit status 2 (265.456293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-462109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-462109 -n newest-cni-462109
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-462109 -n newest-cni-462109
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (69.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m9.346944308s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ldm2h" [9b190788-d965-4ff8-987e-41d66734e19a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005095606s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ldm2h" [9b190788-d965-4ff8-987e-41d66734e19a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004778064s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-459722 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-459722 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-459722 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722: exit status 2 (267.444912ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722: exit status 2 (247.610843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-459722 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-459722 -n default-k8s-diff-port-459722
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (101.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0229 18:46:16.797106   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m41.957017744s)
--- PASS: TestNetworkPlugins/group/calico/Start (101.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wrpbq" [4446d867-ccaf-4a2c-94be-2d35c7860e46] Running
E0229 18:46:20.602697   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:20.607928   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:20.618171   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:20.638396   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:20.678687   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:20.758963   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:20.919579   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:21.240476   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:21.880788   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:23.161853   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
E0229 18:46:25.723040   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005468608s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
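
ControllerPod waits for the kindnet agent pod (labelled app=kindnet) to be healthy in kube-system before the connectivity tests run. A rough kubectl equivalent of that wait; kubectl wait with a 600s timeout stands in for the helper's 10m polling loop.

  # Wait for the kindnet CNI pod to be Ready before running the NetCatPod/DNS checks.
  kubectl --context kindnet-387000 -n kube-system \
    wait pod -l app=kindnet --for=condition=Ready --timeout=600s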

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mx2vp" [54ebea8b-f39f-4354-8ee6-0b8b06d7d7a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mx2vp" [54ebea8b-f39f-4354-8ee6-0b8b06d7d7a9] Running
E0229 18:46:33.750467   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.328599495s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5pmz9" [f7a5c739-dcc6-42ed-b407-cb8826654a1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 18:46:30.844059   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/no-preload-644659/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5pmz9" [f7a5c739-dcc6-42ed-b407-cb8826654a1f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.094046306s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
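
The NetCatPod/DNS/Localhost/HairPin sequence (run above for both the auto and kindnet profiles) deploys a netcat test Deployment and then probes cluster DNS, loopback, and hairpin connectivity from inside the pod. A sketch of the three probes against an existing netcat Deployment, using the same commands as the log; the manifest itself (testdata/netcat-deployment.yaml) is not included in this report.

  # Cluster DNS: resolve the kubernetes.default service from inside the pod.
  kubectl --context kindnet-387000 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: the pod reaches its own port 8080 over loopback.
  kubectl --context kindnet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: the pod connects back to the "netcat" name (presumably its own Service) from inside itself.
  kubectl --context kindnet-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"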

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (85.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m25.939159746s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.94s)
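
Besides the built-in --cni values exercised elsewhere in this run (kindnet, calico, flannel, bridge) and --enable-default-cni, minikube also accepts a path to a custom CNI manifest, which is what the custom-flannel profile uses. A sketch of that start with the flags taken from the log; the manifest (testdata/kube-flannel.yaml) ships with the test suite and is not reproduced here.

  # Start a profile whose CNI is applied from a user-supplied manifest.
  out/minikube-linux-amd64 start -p custom-flannel-387000 --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m \
    --cni=testdata/kube-flannel.yaml \
    --driver=kvm2 --container-runtime=containerd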

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (132.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m12.586891953s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (132.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5tvg6" [f7cb47fb-8e35-4a77-b8bc-bd58883454df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007887519s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7kznv" [72d90154-8d27-4291-92fe-12ef913c7109] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7kznv" [72d90154-8d27-4291-92fe-12ef913c7109] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004581432s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-44vdp" [a07a014a-17fc-422c-ad72-4c3e7f2c3920] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-44vdp" [a07a014a-17fc-422c-ad72-4c3e7f2c3920] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005732763s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (97.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m37.790470945s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.79s)
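For reference, the same flannel start can be reproduced outside the harness with the command captured above (a minimal sketch; the profile name flannel-387000 and the out/minikube-linux-amd64 binary path come from this CI run and would normally be replaced with a local profile and the installed minikube binary):

    # Start a KVM cluster with the flannel CNI and the containerd runtime,
    # waiting up to 15 minutes for all components to become healthy.
    out/minikube-linux-amd64 start -p flannel-387000 --memory=3072 \
      --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=flannel --driver=kvm2 --container-runtime=containerd

    # Tear the profile down afterwards.
    out/minikube-linux-amd64 delete -p flannel-387000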

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-387000 exec deployment/netcat -- nslookup kubernetes.default
E0229 18:48:34.028638   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
E0229 18:48:34.033984   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
E0229 18:48:34.044272   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
E0229 18:48:34.064556   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
E0229 18:48:34.104894   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
E0229 18:48:34.185097   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
E0229 18:48:34.345297   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0229 18:48:34.665452   13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/default-k8s-diff-port-459722/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (105.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-387000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m45.823810323s)
--- PASS: TestNetworkPlugins/group/bridge/Start (105.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9k5hf" [274f8023-b0d5-42b4-b592-6ff8896f7595] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9k5hf" [274f8023-b0d5-42b4-b592-6ff8896f7595] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004611659s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-z6b2q" [2a133517-217d-41c9-90bf-bc4b4a78acda] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005415796s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v8gqz" [3588adc4-8fee-413d-9f67-a79de7f67fad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v8gqz" [3588adc4-8fee-413d-9f67-a79de7f67fad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004249374s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-387000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-387000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w22px" [e25b833f-d66f-4fc8-8f34-26f4a8aa7d63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w22px" [e25b833f-d66f-4fc8-8f34-26f4a8aa7d63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004199436s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-387000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
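The DNS, Localhost and HairPin subtests above all run against the same netcat deployment; a minimal sketch of the three probes, using only the commands captured in this report (the bridge-387000 context name is specific to this run), looks like:

    # DNS: resolve the cluster's API service from inside the pod.
    kubectl --context bridge-387000 exec deployment/netcat -- nslookup kubernetes.default

    # Localhost: confirm the pod can reach a port on its own loopback interface.
    kubectl --context bridge-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

    # HairPin: confirm the pod can reach itself through its own service name,
    # which exercises hairpin traffic handling on the node.
    kubectl --context bridge-387000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"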

                                                
                                    

Test skip (39/316)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
152 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
154 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
157 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
251 TestStartStop/group/disable-driver-mounts 0.14
271 TestNetworkPlugins/group/kubenet 3.26
279 TestNetworkPlugins/group/cilium 3.69
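Individual entries from this table can be re-run by name; a minimal sketch assuming the standard go test conventions and the test/integration layout of the minikube repository (the driver and runtime flags accepted by the harness are not shown in this report and are omitted here):

    # Re-run a single test (skipped here) by name from the repository root.
    go test ./test/integration -run 'TestNetworkPlugins/group/kubenet' -v -timeout 30m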
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-629549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-629549
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-387000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-387000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:29:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.208:8443
  name: cert-expiration-829233
contexts:
- context:
    cluster: cert-expiration-829233
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:29:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-829233
  name: cert-expiration-829233
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-829233
  user:
    client-certificate: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/cert-expiration-829233/client.crt
    client-key: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/cert-expiration-829233/client.key
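The "context was not found" and "does not exist" errors above follow directly from this kubeconfig: only the cert-expiration-829233 cluster is registered, because the kubenet-387000 profile was never started for this skipped test. When debugging a similar failure locally, the available contexts can be inspected with standard kubectl commands (a minimal sketch; the context name is taken from this report):

    # List the contexts that actually exist in the active kubeconfig.
    kubectl config get-contexts

    # Switch to a context that is present before re-running the probes.
    kubectl config use-context cert-expiration-829233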

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-387000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387000"

                                                
                                                
----------------------- debugLogs end: kubenet-387000 [took: 3.109363642s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-387000
--- SKIP: TestNetworkPlugins/group/kubenet (3.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-387000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-387000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/hosts:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/resolv.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-387000

>>> host: crictl pods:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crictl containers:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: describe netcat deployment:
error: context "cilium-387000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-387000" does not exist

>>> k8s: netcat logs:
error: context "cilium-387000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-387000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-387000" does not exist

>>> k8s: coredns logs:
error: context "cilium-387000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-387000" does not exist

>>> k8s: api server logs:
error: context "cilium-387000" does not exist

>>> host: /etc/cni:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: ip a s:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: ip r s:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: iptables-save:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: iptables table nat:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-387000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-387000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-387000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-387000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-387000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-387000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-387000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-387000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: kubelet daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: kubelet logs:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:29:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.208:8443
  name: cert-expiration-829233
contexts:
- context:
    cluster: cert-expiration-829233
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:29:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-829233
  name: cert-expiration-829233
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-829233
  user:
    client-certificate: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/cert-expiration-829233/client.crt
    client-key: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/cert-expiration-829233/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-387000

>>> host: docker daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: docker daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: docker system info:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: cri-docker daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: cri-docker daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: cri-dockerd version:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: containerd daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: containerd daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: containerd config dump:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crio daemon status:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crio daemon config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: /etc/crio:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

>>> host: crio config:
* Profile "cilium-387000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387000"

----------------------- debugLogs end: cilium-387000 [took: 3.554780629s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-387000
--- SKIP: TestNetworkPlugins/group/cilium (3.69s)