Test Report: KVM_Linux_crio 20390

                    
1f24ff7f1f35c751c6a992fe7f61f220cc357745:2025-02-10:38293

Failed tests (11/321)

TestAddons/parallel/Ingress (155s)
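The log below shows the ingress check timing out: the in-VM curl against http://127.0.0.1/ with the Host header nginx.example.com never gets a response, and the wrapping ssh command exits with status 28 after roughly 2m10s. As a rough manual reproduction sketch (assuming the same addons-692802 profile and the testdata manifests from the minikube repository), the failing sequence from the log can be replayed with:

	kubectl --context addons-692802 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	kubectl --context addons-692802 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-692802 replace --force -f testdata/nginx-pod-svc.yaml
	# the step that fails in this run: curl inside the VM never answers, ssh exits with status 28
	out/minikube-linux-amd64 -p addons-692802 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"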

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-692802 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-692802 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-692802 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b150f03f-0763-48b1-a6dd-5456e6ab3976] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b150f03f-0763-48b1-a6dd-5456e6ab3976] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003771249s
I0210 12:48:02.670836  588140 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-692802 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.297155168s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-692802 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.213
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-692802 -n addons-692802
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 logs -n 25: (1.222843813s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-573306                                                                     | download-only-573306 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC | 10 Feb 25 12:44 UTC |
	| delete  | -p download-only-754359                                                                     | download-only-754359 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC | 10 Feb 25 12:44 UTC |
	| delete  | -p download-only-573306                                                                     | download-only-573306 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC | 10 Feb 25 12:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-261143 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC |                     |
	|         | binary-mirror-261143                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41717                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-261143                                                                     | binary-mirror-261143 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC | 10 Feb 25 12:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC |                     |
	|         | addons-692802                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC |                     |
	|         | addons-692802                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-692802 --wait=true                                                                | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC | 10 Feb 25 12:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-692802 addons disable                                                                | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-692802 addons disable                                                                | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | -p addons-692802                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-692802 addons                                                                        | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-692802 addons                                                                        | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-692802 addons disable                                                                | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-692802 ip                                                                            | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	| addons  | addons-692802 addons disable                                                                | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-692802 addons                                                                        | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:47 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-692802 addons disable                                                                | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:47 UTC | 10 Feb 25 12:48 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-692802 ssh curl -s                                                                   | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-692802 addons                                                                        | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:48 UTC | 10 Feb 25 12:48 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-692802 ssh cat                                                                       | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:48 UTC | 10 Feb 25 12:48 UTC |
	|         | /opt/local-path-provisioner/pvc-e0786582-1bf5-4756-b266-564a46774f86_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-692802 addons disable                                                                | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:48 UTC | 10 Feb 25 12:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-692802 addons                                                                        | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:48 UTC | 10 Feb 25 12:48 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-692802 addons                                                                        | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:48 UTC | 10 Feb 25 12:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-692802 ip                                                                            | addons-692802        | jenkins | v1.35.0 | 10 Feb 25 12:50 UTC | 10 Feb 25 12:50 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:44:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:44:51.621491  588836 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:44:51.621755  588836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:44:51.621765  588836 out.go:358] Setting ErrFile to fd 2...
	I0210 12:44:51.621769  588836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:44:51.621984  588836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 12:44:51.622641  588836 out.go:352] Setting JSON to false
	I0210 12:44:51.623643  588836 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8837,"bootTime":1739182655,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:44:51.623757  588836 start.go:139] virtualization: kvm guest
	I0210 12:44:51.625764  588836 out.go:177] * [addons-692802] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:44:51.627280  588836 notify.go:220] Checking for updates...
	I0210 12:44:51.627309  588836 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:44:51.628727  588836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:44:51.630056  588836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 12:44:51.631415  588836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 12:44:51.632628  588836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:44:51.633739  588836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:44:51.635024  588836 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:44:51.667119  588836 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 12:44:51.668214  588836 start.go:297] selected driver: kvm2
	I0210 12:44:51.668226  588836 start.go:901] validating driver "kvm2" against <nil>
	I0210 12:44:51.668238  588836 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:44:51.669052  588836 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:44:51.669145  588836 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 12:44:51.684824  588836 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 12:44:51.684871  588836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:44:51.685172  588836 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:44:51.685216  588836 cni.go:84] Creating CNI manager for ""
	I0210 12:44:51.685278  588836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:44:51.685292  588836 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 12:44:51.685365  588836 start.go:340] cluster config:
	{Name:addons-692802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-692802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:44:51.685470  588836 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:44:51.687196  588836 out.go:177] * Starting "addons-692802" primary control-plane node in "addons-692802" cluster
	I0210 12:44:51.688843  588836 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 12:44:51.688878  588836 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 12:44:51.688892  588836 cache.go:56] Caching tarball of preloaded images
	I0210 12:44:51.688971  588836 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 12:44:51.688985  588836 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 12:44:51.689319  588836 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/config.json ...
	I0210 12:44:51.689344  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/config.json: {Name:mke76ddc1c88e5bac6f73305fda63b5195e65f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:44:51.689510  588836 start.go:360] acquireMachinesLock for addons-692802: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:44:51.689574  588836 start.go:364] duration metric: took 45.222µs to acquireMachinesLock for "addons-692802"
	I0210 12:44:51.689600  588836 start.go:93] Provisioning new machine with config: &{Name:addons-692802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-692802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 12:44:51.689670  588836 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 12:44:51.691330  588836 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0210 12:44:51.691464  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:44:51.691519  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:44:51.706223  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0210 12:44:51.706692  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:44:51.707495  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:44:51.707538  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:44:51.708020  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:44:51.708246  588836 main.go:141] libmachine: (addons-692802) Calling .GetMachineName
	I0210 12:44:51.708426  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:44:51.708614  588836 start.go:159] libmachine.API.Create for "addons-692802" (driver="kvm2")
	I0210 12:44:51.708662  588836 client.go:168] LocalClient.Create starting
	I0210 12:44:51.708710  588836 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem
	I0210 12:44:51.988375  588836 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem
	I0210 12:44:52.231132  588836 main.go:141] libmachine: Running pre-create checks...
	I0210 12:44:52.231159  588836 main.go:141] libmachine: (addons-692802) Calling .PreCreateCheck
	I0210 12:44:52.231696  588836 main.go:141] libmachine: (addons-692802) Calling .GetConfigRaw
	I0210 12:44:52.232191  588836 main.go:141] libmachine: Creating machine...
	I0210 12:44:52.232207  588836 main.go:141] libmachine: (addons-692802) Calling .Create
	I0210 12:44:52.232374  588836 main.go:141] libmachine: (addons-692802) creating KVM machine...
	I0210 12:44:52.232394  588836 main.go:141] libmachine: (addons-692802) creating network...
	I0210 12:44:52.233723  588836 main.go:141] libmachine: (addons-692802) DBG | found existing default KVM network
	I0210 12:44:52.234501  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:52.234341  588858 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000215310}
	I0210 12:44:52.234531  588836 main.go:141] libmachine: (addons-692802) DBG | created network xml: 
	I0210 12:44:52.234541  588836 main.go:141] libmachine: (addons-692802) DBG | <network>
	I0210 12:44:52.234549  588836 main.go:141] libmachine: (addons-692802) DBG |   <name>mk-addons-692802</name>
	I0210 12:44:52.234557  588836 main.go:141] libmachine: (addons-692802) DBG |   <dns enable='no'/>
	I0210 12:44:52.234563  588836 main.go:141] libmachine: (addons-692802) DBG |   
	I0210 12:44:52.234569  588836 main.go:141] libmachine: (addons-692802) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0210 12:44:52.234574  588836 main.go:141] libmachine: (addons-692802) DBG |     <dhcp>
	I0210 12:44:52.234579  588836 main.go:141] libmachine: (addons-692802) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0210 12:44:52.234585  588836 main.go:141] libmachine: (addons-692802) DBG |     </dhcp>
	I0210 12:44:52.234602  588836 main.go:141] libmachine: (addons-692802) DBG |   </ip>
	I0210 12:44:52.234610  588836 main.go:141] libmachine: (addons-692802) DBG |   
	I0210 12:44:52.234616  588836 main.go:141] libmachine: (addons-692802) DBG | </network>
	I0210 12:44:52.234627  588836 main.go:141] libmachine: (addons-692802) DBG | 
	I0210 12:44:52.239892  588836 main.go:141] libmachine: (addons-692802) DBG | trying to create private KVM network mk-addons-692802 192.168.39.0/24...
	I0210 12:44:52.305034  588836 main.go:141] libmachine: (addons-692802) setting up store path in /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802 ...
	I0210 12:44:52.305081  588836 main.go:141] libmachine: (addons-692802) DBG | private KVM network mk-addons-692802 192.168.39.0/24 created
	I0210 12:44:52.305090  588836 main.go:141] libmachine: (addons-692802) building disk image from file:///home/jenkins/minikube-integration/20390-580861/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 12:44:52.305121  588836 main.go:141] libmachine: (addons-692802) Downloading /home/jenkins/minikube-integration/20390-580861/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20390-580861/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 12:44:52.305133  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:52.304948  588858 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 12:44:52.608025  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:52.607867  588858 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa...
	I0210 12:44:52.766660  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:52.766515  588858 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/addons-692802.rawdisk...
	I0210 12:44:52.766696  588836 main.go:141] libmachine: (addons-692802) DBG | Writing magic tar header
	I0210 12:44:52.766713  588836 main.go:141] libmachine: (addons-692802) DBG | Writing SSH key tar header
	I0210 12:44:52.766723  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:52.766654  588858 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802 ...
	I0210 12:44:52.766824  588836 main.go:141] libmachine: (addons-692802) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802
	I0210 12:44:52.766858  588836 main.go:141] libmachine: (addons-692802) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube/machines
	I0210 12:44:52.766871  588836 main.go:141] libmachine: (addons-692802) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802 (perms=drwx------)
	I0210 12:44:52.766896  588836 main.go:141] libmachine: (addons-692802) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 12:44:52.766907  588836 main.go:141] libmachine: (addons-692802) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861
	I0210 12:44:52.766918  588836 main.go:141] libmachine: (addons-692802) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 12:44:52.766932  588836 main.go:141] libmachine: (addons-692802) DBG | checking permissions on dir: /home/jenkins
	I0210 12:44:52.766952  588836 main.go:141] libmachine: (addons-692802) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube/machines (perms=drwxr-xr-x)
	I0210 12:44:52.766963  588836 main.go:141] libmachine: (addons-692802) DBG | checking permissions on dir: /home
	I0210 12:44:52.766972  588836 main.go:141] libmachine: (addons-692802) DBG | skipping /home - not owner
	I0210 12:44:52.766983  588836 main.go:141] libmachine: (addons-692802) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube (perms=drwxr-xr-x)
	I0210 12:44:52.766989  588836 main.go:141] libmachine: (addons-692802) setting executable bit set on /home/jenkins/minikube-integration/20390-580861 (perms=drwxrwxr-x)
	I0210 12:44:52.766997  588836 main.go:141] libmachine: (addons-692802) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 12:44:52.767002  588836 main.go:141] libmachine: (addons-692802) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 12:44:52.767012  588836 main.go:141] libmachine: (addons-692802) creating domain...
	I0210 12:44:52.768047  588836 main.go:141] libmachine: (addons-692802) define libvirt domain using xml: 
	I0210 12:44:52.768082  588836 main.go:141] libmachine: (addons-692802) <domain type='kvm'>
	I0210 12:44:52.768096  588836 main.go:141] libmachine: (addons-692802)   <name>addons-692802</name>
	I0210 12:44:52.768105  588836 main.go:141] libmachine: (addons-692802)   <memory unit='MiB'>4000</memory>
	I0210 12:44:52.768115  588836 main.go:141] libmachine: (addons-692802)   <vcpu>2</vcpu>
	I0210 12:44:52.768122  588836 main.go:141] libmachine: (addons-692802)   <features>
	I0210 12:44:52.768132  588836 main.go:141] libmachine: (addons-692802)     <acpi/>
	I0210 12:44:52.768137  588836 main.go:141] libmachine: (addons-692802)     <apic/>
	I0210 12:44:52.768145  588836 main.go:141] libmachine: (addons-692802)     <pae/>
	I0210 12:44:52.768156  588836 main.go:141] libmachine: (addons-692802)     
	I0210 12:44:52.768165  588836 main.go:141] libmachine: (addons-692802)   </features>
	I0210 12:44:52.768176  588836 main.go:141] libmachine: (addons-692802)   <cpu mode='host-passthrough'>
	I0210 12:44:52.768207  588836 main.go:141] libmachine: (addons-692802)   
	I0210 12:44:52.768228  588836 main.go:141] libmachine: (addons-692802)   </cpu>
	I0210 12:44:52.768237  588836 main.go:141] libmachine: (addons-692802)   <os>
	I0210 12:44:52.768245  588836 main.go:141] libmachine: (addons-692802)     <type>hvm</type>
	I0210 12:44:52.768255  588836 main.go:141] libmachine: (addons-692802)     <boot dev='cdrom'/>
	I0210 12:44:52.768260  588836 main.go:141] libmachine: (addons-692802)     <boot dev='hd'/>
	I0210 12:44:52.768265  588836 main.go:141] libmachine: (addons-692802)     <bootmenu enable='no'/>
	I0210 12:44:52.768292  588836 main.go:141] libmachine: (addons-692802)   </os>
	I0210 12:44:52.768301  588836 main.go:141] libmachine: (addons-692802)   <devices>
	I0210 12:44:52.768312  588836 main.go:141] libmachine: (addons-692802)     <disk type='file' device='cdrom'>
	I0210 12:44:52.768328  588836 main.go:141] libmachine: (addons-692802)       <source file='/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/boot2docker.iso'/>
	I0210 12:44:52.768339  588836 main.go:141] libmachine: (addons-692802)       <target dev='hdc' bus='scsi'/>
	I0210 12:44:52.768350  588836 main.go:141] libmachine: (addons-692802)       <readonly/>
	I0210 12:44:52.768359  588836 main.go:141] libmachine: (addons-692802)     </disk>
	I0210 12:44:52.768378  588836 main.go:141] libmachine: (addons-692802)     <disk type='file' device='disk'>
	I0210 12:44:52.768400  588836 main.go:141] libmachine: (addons-692802)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 12:44:52.768421  588836 main.go:141] libmachine: (addons-692802)       <source file='/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/addons-692802.rawdisk'/>
	I0210 12:44:52.768435  588836 main.go:141] libmachine: (addons-692802)       <target dev='hda' bus='virtio'/>
	I0210 12:44:52.768448  588836 main.go:141] libmachine: (addons-692802)     </disk>
	I0210 12:44:52.768459  588836 main.go:141] libmachine: (addons-692802)     <interface type='network'>
	I0210 12:44:52.768470  588836 main.go:141] libmachine: (addons-692802)       <source network='mk-addons-692802'/>
	I0210 12:44:52.768480  588836 main.go:141] libmachine: (addons-692802)       <model type='virtio'/>
	I0210 12:44:52.768490  588836 main.go:141] libmachine: (addons-692802)     </interface>
	I0210 12:44:52.768501  588836 main.go:141] libmachine: (addons-692802)     <interface type='network'>
	I0210 12:44:52.768514  588836 main.go:141] libmachine: (addons-692802)       <source network='default'/>
	I0210 12:44:52.768525  588836 main.go:141] libmachine: (addons-692802)       <model type='virtio'/>
	I0210 12:44:52.768534  588836 main.go:141] libmachine: (addons-692802)     </interface>
	I0210 12:44:52.768546  588836 main.go:141] libmachine: (addons-692802)     <serial type='pty'>
	I0210 12:44:52.768554  588836 main.go:141] libmachine: (addons-692802)       <target port='0'/>
	I0210 12:44:52.768566  588836 main.go:141] libmachine: (addons-692802)     </serial>
	I0210 12:44:52.768575  588836 main.go:141] libmachine: (addons-692802)     <console type='pty'>
	I0210 12:44:52.768587  588836 main.go:141] libmachine: (addons-692802)       <target type='serial' port='0'/>
	I0210 12:44:52.768603  588836 main.go:141] libmachine: (addons-692802)     </console>
	I0210 12:44:52.768616  588836 main.go:141] libmachine: (addons-692802)     <rng model='virtio'>
	I0210 12:44:52.768628  588836 main.go:141] libmachine: (addons-692802)       <backend model='random'>/dev/random</backend>
	I0210 12:44:52.768648  588836 main.go:141] libmachine: (addons-692802)     </rng>
	I0210 12:44:52.768658  588836 main.go:141] libmachine: (addons-692802)     
	I0210 12:44:52.768677  588836 main.go:141] libmachine: (addons-692802)     
	I0210 12:44:52.768697  588836 main.go:141] libmachine: (addons-692802)   </devices>
	I0210 12:44:52.768709  588836 main.go:141] libmachine: (addons-692802) </domain>
	I0210 12:44:52.768719  588836 main.go:141] libmachine: (addons-692802) 
	I0210 12:44:52.774349  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:6a:f5:b1 in network default
	I0210 12:44:52.774845  588836 main.go:141] libmachine: (addons-692802) starting domain...
	I0210 12:44:52.774872  588836 main.go:141] libmachine: (addons-692802) ensuring networks are active...
	I0210 12:44:52.774880  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:52.775511  588836 main.go:141] libmachine: (addons-692802) Ensuring network default is active
	I0210 12:44:52.775823  588836 main.go:141] libmachine: (addons-692802) Ensuring network mk-addons-692802 is active
	I0210 12:44:52.776493  588836 main.go:141] libmachine: (addons-692802) getting domain XML...
	I0210 12:44:52.777168  588836 main.go:141] libmachine: (addons-692802) creating domain...
	I0210 12:44:54.163138  588836 main.go:141] libmachine: (addons-692802) waiting for IP...
	I0210 12:44:54.164096  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:54.164540  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:54.164662  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:54.164554  588858 retry.go:31] will retry after 290.269747ms: waiting for domain to come up
	I0210 12:44:54.456063  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:54.456612  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:54.456648  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:54.456553  588858 retry.go:31] will retry after 332.496771ms: waiting for domain to come up
	I0210 12:44:54.791158  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:54.791558  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:54.791590  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:54.791510  588858 retry.go:31] will retry after 326.794013ms: waiting for domain to come up
	I0210 12:44:55.119994  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:55.120463  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:55.120495  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:55.120413  588858 retry.go:31] will retry after 549.302599ms: waiting for domain to come up
	I0210 12:44:55.671059  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:55.671454  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:55.671480  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:55.671422  588858 retry.go:31] will retry after 531.869693ms: waiting for domain to come up
	I0210 12:44:56.205207  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:56.205575  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:56.205610  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:56.205567  588858 retry.go:31] will retry after 581.30212ms: waiting for domain to come up
	I0210 12:44:56.789034  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:56.789381  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:56.789437  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:56.789359  588858 retry.go:31] will retry after 1.036104814s: waiting for domain to come up
	I0210 12:44:57.827549  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:57.827909  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:57.827936  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:57.827857  588858 retry.go:31] will retry after 1.258993041s: waiting for domain to come up
	I0210 12:44:59.088196  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:44:59.088618  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:44:59.088649  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:44:59.088566  588858 retry.go:31] will retry after 1.773103379s: waiting for domain to come up
	I0210 12:45:00.864717  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:00.865124  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:45:00.865150  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:45:00.865085  588858 retry.go:31] will retry after 1.996288151s: waiting for domain to come up
	I0210 12:45:02.862778  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:02.863235  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:45:02.863265  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:45:02.863182  588858 retry.go:31] will retry after 2.064288561s: waiting for domain to come up
	I0210 12:45:04.929268  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:04.929757  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:45:04.929789  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:45:04.929716  588858 retry.go:31] will retry after 3.270885028s: waiting for domain to come up
	I0210 12:45:08.202417  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:08.202815  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:45:08.202852  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:45:08.202792  588858 retry.go:31] will retry after 2.836074778s: waiting for domain to come up
	I0210 12:45:11.042736  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:11.043223  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find current IP address of domain addons-692802 in network mk-addons-692802
	I0210 12:45:11.043258  588836 main.go:141] libmachine: (addons-692802) DBG | I0210 12:45:11.043186  588858 retry.go:31] will retry after 3.757780866s: waiting for domain to come up
	I0210 12:45:14.804538  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:14.804828  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has current primary IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:14.804867  588836 main.go:141] libmachine: (addons-692802) found domain IP: 192.168.39.213
	I0210 12:45:14.804879  588836 main.go:141] libmachine: (addons-692802) reserving static IP address...
	I0210 12:45:14.805171  588836 main.go:141] libmachine: (addons-692802) DBG | unable to find host DHCP lease matching {name: "addons-692802", mac: "52:54:00:13:9a:c4", ip: "192.168.39.213"} in network mk-addons-692802
	I0210 12:45:14.877319  588836 main.go:141] libmachine: (addons-692802) DBG | Getting to WaitForSSH function...
	I0210 12:45:14.877359  588836 main.go:141] libmachine: (addons-692802) reserved static IP address 192.168.39.213 for domain addons-692802
	I0210 12:45:14.877375  588836 main.go:141] libmachine: (addons-692802) waiting for SSH...
	I0210 12:45:14.879667  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:14.880083  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:14.880123  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:14.880342  588836 main.go:141] libmachine: (addons-692802) DBG | Using SSH client type: external
	I0210 12:45:14.880381  588836 main.go:141] libmachine: (addons-692802) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa (-rw-------)
	I0210 12:45:14.880450  588836 main.go:141] libmachine: (addons-692802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 12:45:14.880491  588836 main.go:141] libmachine: (addons-692802) DBG | About to run SSH command:
	I0210 12:45:14.880503  588836 main.go:141] libmachine: (addons-692802) DBG | exit 0
	I0210 12:45:15.008401  588836 main.go:141] libmachine: (addons-692802) DBG | SSH cmd err, output: <nil>: 
	I0210 12:45:15.008622  588836 main.go:141] libmachine: (addons-692802) KVM machine creation complete
	I0210 12:45:15.009001  588836 main.go:141] libmachine: (addons-692802) Calling .GetConfigRaw
	I0210 12:45:15.009637  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:15.009899  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:15.010071  588836 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 12:45:15.010084  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:15.011428  588836 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 12:45:15.011445  588836 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 12:45:15.011450  588836 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 12:45:15.011456  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:15.013634  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.013935  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.013968  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.014204  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:15.014368  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.014592  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.014737  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:15.014953  588836 main.go:141] libmachine: Using SSH client type: native
	I0210 12:45:15.015214  588836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0210 12:45:15.015231  588836 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 12:45:15.123669  588836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:45:15.123699  588836 main.go:141] libmachine: Detecting the provisioner...
	I0210 12:45:15.123709  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:15.126358  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.126719  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.126749  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.126881  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:15.127104  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.127281  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.127430  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:15.127611  588836 main.go:141] libmachine: Using SSH client type: native
	I0210 12:45:15.127784  588836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0210 12:45:15.127794  588836 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 12:45:15.237198  588836 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 12:45:15.237302  588836 main.go:141] libmachine: found compatible host: buildroot
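(The provisioner is chosen by reading /etc/os-release on the guest, shown above, and matching the ID field. A minimal sketch of that check, assuming the same SSH details:)
	# Sketch: extract the distro ID the way the detection step interprets it.
	ssh -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa \
	  docker@192.168.39.213 'cat /etc/os-release' | awk -F= '$1=="ID"{print $2}'
	# expected here: buildroot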
	I0210 12:45:15.237311  588836 main.go:141] libmachine: Provisioning with buildroot...
	I0210 12:45:15.237319  588836 main.go:141] libmachine: (addons-692802) Calling .GetMachineName
	I0210 12:45:15.237598  588836 buildroot.go:166] provisioning hostname "addons-692802"
	I0210 12:45:15.237624  588836 main.go:141] libmachine: (addons-692802) Calling .GetMachineName
	I0210 12:45:15.237851  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:15.240208  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.240502  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.240532  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.240665  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:15.240840  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.240983  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.241105  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:15.241261  588836 main.go:141] libmachine: Using SSH client type: native
	I0210 12:45:15.241448  588836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0210 12:45:15.241459  588836 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-692802 && echo "addons-692802" | sudo tee /etc/hostname
	I0210 12:45:15.362830  588836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-692802
	
	I0210 12:45:15.362856  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:15.365360  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.365649  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.365677  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.365870  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:15.366052  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.366187  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.366307  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:15.366489  588836 main.go:141] libmachine: Using SSH client type: native
	I0210 12:45:15.366722  588836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0210 12:45:15.366746  588836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-692802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-692802/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-692802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:45:15.481392  588836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
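(After the hostname command and the /etc/hosts fix-up above, the guest should report the profile name. A quick verification sketch, run on the guest:)
	# Sketch: confirm the hostname and the 127.0.1.1 entry written above.
	hostname                          # addons-692802
	grep -n '127.0.1.1' /etc/hosts    # should map to addons-692802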
	I0210 12:45:15.481423  588836 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 12:45:15.481466  588836 buildroot.go:174] setting up certificates
	I0210 12:45:15.481480  588836 provision.go:84] configureAuth start
	I0210 12:45:15.481492  588836 main.go:141] libmachine: (addons-692802) Calling .GetMachineName
	I0210 12:45:15.481717  588836 main.go:141] libmachine: (addons-692802) Calling .GetIP
	I0210 12:45:15.484546  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.484801  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.484838  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.485001  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:15.486981  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.487246  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.487275  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.487422  588836 provision.go:143] copyHostCerts
	I0210 12:45:15.487515  588836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 12:45:15.487634  588836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 12:45:15.487707  588836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 12:45:15.487773  588836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.addons-692802 san=[127.0.0.1 192.168.39.213 addons-692802 localhost minikube]
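(provision.go generates the server certificate from the CA key pair with the SANs listed above. A hypothetical openssl equivalent with placeholder file names, not what minikube actually runs, signing a fresh key against the same CA and SAN set:)
	# Illustrative only; minikube does this in Go, not via openssl.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.addons-692802"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.213,DNS:addons-692802,DNS:localhost,DNS:minikube')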
	I0210 12:45:15.621040  588836 provision.go:177] copyRemoteCerts
	I0210 12:45:15.621114  588836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:45:15.621143  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:15.624103  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.624531  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.624562  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.624730  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:15.624913  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.625092  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:15.625220  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:15.706409  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 12:45:15.730956  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 12:45:15.755150  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 12:45:15.778857  588836 provision.go:87] duration metric: took 297.361801ms to configureAuth
	I0210 12:45:15.778885  588836 buildroot.go:189] setting minikube options for container-runtime
	I0210 12:45:15.779097  588836 config.go:182] Loaded profile config "addons-692802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:45:15.779200  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:15.781963  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.782297  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:15.782320  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:15.782490  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:15.782668  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.782839  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:15.783008  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:15.783172  588836 main.go:141] libmachine: Using SSH client type: native
	I0210 12:45:15.783336  588836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0210 12:45:15.783349  588836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 12:45:16.007991  588836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
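(The tee/restart command above should leave a one-line drop-in on the guest. A quick check sketch:)
	# Sketch: verify the sysconfig drop-in and that CRI-O came back after the restart.
	cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # active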
	I0210 12:45:16.008027  588836 main.go:141] libmachine: Checking connection to Docker...
	I0210 12:45:16.008039  588836 main.go:141] libmachine: (addons-692802) Calling .GetURL
	I0210 12:45:16.009351  588836 main.go:141] libmachine: (addons-692802) DBG | using libvirt version 6000000
	I0210 12:45:16.011548  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.011905  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:16.011949  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.012153  588836 main.go:141] libmachine: Docker is up and running!
	I0210 12:45:16.012170  588836 main.go:141] libmachine: Reticulating splines...
	I0210 12:45:16.012178  588836 client.go:171] duration metric: took 24.303503361s to LocalClient.Create
	I0210 12:45:16.012216  588836 start.go:167] duration metric: took 24.303603572s to libmachine.API.Create "addons-692802"
	I0210 12:45:16.012231  588836 start.go:293] postStartSetup for "addons-692802" (driver="kvm2")
	I0210 12:45:16.012248  588836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:45:16.012272  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:16.012540  588836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:45:16.012566  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:16.014568  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.014963  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:16.014990  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.015151  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:16.015332  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:16.015469  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:16.015661  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:16.098956  588836 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:45:16.103475  588836 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 12:45:16.103519  588836 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 12:45:16.103598  588836 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 12:45:16.103643  588836 start.go:296] duration metric: took 91.390516ms for postStartSetup
	I0210 12:45:16.103694  588836 main.go:141] libmachine: (addons-692802) Calling .GetConfigRaw
	I0210 12:45:16.104399  588836 main.go:141] libmachine: (addons-692802) Calling .GetIP
	I0210 12:45:16.106877  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.107184  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:16.107215  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.107482  588836 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/config.json ...
	I0210 12:45:16.107645  588836 start.go:128] duration metric: took 24.417963538s to createHost
	I0210 12:45:16.107668  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:16.109745  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.110016  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:16.110042  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.110134  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:16.110310  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:16.110424  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:16.110545  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:16.110707  588836 main.go:141] libmachine: Using SSH client type: native
	I0210 12:45:16.110859  588836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0210 12:45:16.110869  588836 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 12:45:16.216942  588836 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739191516.179397572
	
	I0210 12:45:16.216973  588836 fix.go:216] guest clock: 1739191516.179397572
	I0210 12:45:16.216981  588836 fix.go:229] Guest: 2025-02-10 12:45:16.179397572 +0000 UTC Remote: 2025-02-10 12:45:16.107656784 +0000 UTC m=+24.523972368 (delta=71.740788ms)
	I0210 12:45:16.217017  588836 fix.go:200] guest clock delta is within tolerance: 71.740788ms
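(fix.go compares the guest clock, taken from the `date +%s.%N` above, with the host clock and accepts the ~72ms delta. A rough manual equivalent, using the same key path as before:)
	# Sketch: compute guest-vs-host clock skew the same way.
	guest=$(ssh -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa \
	  docker@192.168.39.213 'date +%s.%N')
	host=$(date +%s.%N)
	echo "delta: $(echo "$guest - $host" | bc) s"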
	I0210 12:45:16.217028  588836 start.go:83] releasing machines lock for "addons-692802", held for 24.527437206s
	I0210 12:45:16.217060  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:16.217386  588836 main.go:141] libmachine: (addons-692802) Calling .GetIP
	I0210 12:45:16.219946  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.220269  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:16.220324  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.220521  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:16.221051  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:16.221237  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:16.221327  588836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 12:45:16.221375  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:16.221447  588836 ssh_runner.go:195] Run: cat /version.json
	I0210 12:45:16.221475  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:16.223840  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.224052  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.224195  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:16.224228  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.224401  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:16.224427  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:16.224447  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:16.224600  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:16.224711  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:16.224723  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:16.224866  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:16.224884  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:16.225050  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:16.225052  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:16.301582  588836 ssh_runner.go:195] Run: systemctl --version
	I0210 12:45:16.325823  588836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 12:45:16.481519  588836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 12:45:16.487419  588836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 12:45:16.487497  588836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:45:16.503358  588836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 12:45:16.503393  588836 start.go:495] detecting cgroup driver to use...
	I0210 12:45:16.503462  588836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 12:45:16.519746  588836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:45:16.534341  588836 docker.go:217] disabling cri-docker service (if available) ...
	I0210 12:45:16.534411  588836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 12:45:16.548379  588836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 12:45:16.561960  588836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 12:45:16.676588  588836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 12:45:16.814794  588836 docker.go:233] disabling docker service ...
	I0210 12:45:16.814884  588836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 12:45:16.830175  588836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 12:45:16.844059  588836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 12:45:16.996351  588836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 12:45:17.131510  588836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 12:45:17.145621  588836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:45:17.164240  588836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 12:45:17.164329  588836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:45:17.174419  588836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 12:45:17.174496  588836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:45:17.184565  588836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:45:17.194474  588836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:45:17.204367  588836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:45:17.214269  588836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:45:17.224105  588836 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 12:45:17.240959  588836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
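(The sed commands above set the pause image, cgroup manager, conmon cgroup and default sysctls in the CRI-O drop-in. A sketch of inspecting the expected end state; the commented lines are the values these edits aim for, not a verbatim copy of the file:)
	# Sketch: check the keys the edits above touch in /etc/crio/crio.conf.d/02-crio.conf.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",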
	I0210 12:45:17.250946  588836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:45:17.259653  588836 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:45:17.259706  588836 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 12:45:17.271773  588836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 12:45:17.280796  588836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:45:17.403041  588836 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 12:45:17.496529  588836 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 12:45:17.496646  588836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 12:45:17.501697  588836 start.go:563] Will wait 60s for crictl version
	I0210 12:45:17.501776  588836 ssh_runner.go:195] Run: which crictl
	I0210 12:45:17.505648  588836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:45:17.545321  588836 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
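(With /etc/crictl.yaml written earlier and CRI-O restarted, crictl talks to the CRI-O socket directly, as the version probe above shows. A couple of further queries that should work at this point, as a sketch:)
	# Sketch: other crictl calls usable once the runtime endpoint is configured.
	sudo crictl info | head        # runtime status/config as JSON
	sudo crictl ps -a              # no containers yet this early in the bootstrap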
	I0210 12:45:17.545436  588836 ssh_runner.go:195] Run: crio --version
	I0210 12:45:17.572549  588836 ssh_runner.go:195] Run: crio --version
	I0210 12:45:17.601141  588836 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 12:45:17.602438  588836 main.go:141] libmachine: (addons-692802) Calling .GetIP
	I0210 12:45:17.604996  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:17.605305  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:17.605338  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:17.605574  588836 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 12:45:17.609619  588836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:45:17.624479  588836 kubeadm.go:883] updating cluster {Name:addons-692802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-692802 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 12:45:17.624593  588836 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 12:45:17.624633  588836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:45:17.657412  588836 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 12:45:17.657483  588836 ssh_runner.go:195] Run: which lz4
	I0210 12:45:17.661533  588836 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 12:45:17.665548  588836 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 12:45:17.665578  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 12:45:19.003813  588836 crio.go:462] duration metric: took 1.342306281s to copy over tarball
	I0210 12:45:19.003901  588836 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 12:45:21.196786  588836 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.192839618s)
	I0210 12:45:21.196832  588836 crio.go:469] duration metric: took 2.192982472s to extract the tarball
	I0210 12:45:21.196844  588836 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 12:45:21.234440  588836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:45:21.275300  588836 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 12:45:21.275327  588836 cache_images.go:84] Images are preloaded, skipping loading
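(After the preload tarball is extracted into /var, the second `crictl images` pass finds everything. A spot-check sketch for the apiserver image mentioned earlier:)
	# Sketch: confirm the preloaded control-plane images are visible to CRI-O.
	sudo crictl images | grep kube-apiserver   # expect registry.k8s.io/kube-apiserver v1.32.1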
	I0210 12:45:21.275336  588836 kubeadm.go:934] updating node { 192.168.39.213 8443 v1.32.1 crio true true} ...
	I0210 12:45:21.275461  588836 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-692802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-692802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 12:45:21.275529  588836 ssh_runner.go:195] Run: crio config
	I0210 12:45:21.324474  588836 cni.go:84] Creating CNI manager for ""
	I0210 12:45:21.324500  588836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:45:21.324512  588836 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 12:45:21.324535  588836 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-692802 NodeName:addons-692802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 12:45:21.324659  588836 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-692802"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.213"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
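(The generated kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml and fed to `kubeadm init`, as the Start line further down shows. A hedged sanity-check sketch, if one wanted to validate it first:)
	# Sketch only: dry-run the same config with the bundled kubeadm before the real init.
	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run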
	I0210 12:45:21.324722  588836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:45:21.334852  588836 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:45:21.334933  588836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 12:45:21.344352  588836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0210 12:45:21.360671  588836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:45:21.377573  588836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0210 12:45:21.394100  588836 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0210 12:45:21.397942  588836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:45:21.409923  588836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:45:21.541535  588836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:45:21.559821  588836 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802 for IP: 192.168.39.213
	I0210 12:45:21.559855  588836 certs.go:194] generating shared ca certs ...
	I0210 12:45:21.559879  588836 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:21.560077  588836 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 12:45:21.733387  588836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt ...
	I0210 12:45:21.733420  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt: {Name:mk66563008cfc52b4e8ebb58462f656198e46cfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:21.733594  588836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key ...
	I0210 12:45:21.733605  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key: {Name:mk7ecf1c9141794108ead2958832372c892edaa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:21.733678  588836 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 12:45:22.020841  588836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt ...
	I0210 12:45:22.020883  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt: {Name:mkcba6f5dcee4e76854fd894fe6d7f2fa2e360a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.021067  588836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key ...
	I0210 12:45:22.021079  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key: {Name:mk7e18c0343427e06fe253ed421039c91544064c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.021150  588836 certs.go:256] generating profile certs ...
	I0210 12:45:22.021213  588836 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.key
	I0210 12:45:22.021228  588836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt with IP's: []
	I0210 12:45:22.234354  588836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt ...
	I0210 12:45:22.234387  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: {Name:mk87e4be221d1f66a2e4f0e016a4cd17168f51a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.234550  588836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.key ...
	I0210 12:45:22.234562  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.key: {Name:mk48ed04c0be736d9d3988aaf65d26b0a3978a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.234629  588836 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.key.0fb86438
	I0210 12:45:22.234647  588836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.crt.0fb86438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.213]
	I0210 12:45:22.464639  588836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.crt.0fb86438 ...
	I0210 12:45:22.464674  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.crt.0fb86438: {Name:mkc01414792c786ee536cb8325a1c1a96e350c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.464850  588836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.key.0fb86438 ...
	I0210 12:45:22.464864  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.key.0fb86438: {Name:mkfe85488e9ff79a5a7eeaee8ce47e2bb04499b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.464944  588836 certs.go:381] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.crt.0fb86438 -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.crt
	I0210 12:45:22.465017  588836 certs.go:385] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.key.0fb86438 -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.key
	I0210 12:45:22.465063  588836 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.key
	I0210 12:45:22.465082  588836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.crt with IP's: []
	I0210 12:45:22.556839  588836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.crt ...
	I0210 12:45:22.556879  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.crt: {Name:mkb06695ce4b46895124e6198c852f96928ac03c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.557052  588836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.key ...
	I0210 12:45:22.557067  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.key: {Name:mk840b554bf0a218191ff8e58f3d01eeaa0cbf74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:22.557242  588836 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 12:45:22.557284  588836 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 12:45:22.557307  588836 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 12:45:22.557333  588836 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 12:45:22.557912  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:45:22.583818  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:45:22.608650  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:45:22.633716  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 12:45:22.658430  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0210 12:45:22.683592  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 12:45:22.707936  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 12:45:22.732039  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 12:45:22.755865  588836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:45:22.779638  588836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 12:45:22.797000  588836 ssh_runner.go:195] Run: openssl version
	I0210 12:45:22.802922  588836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:45:22.814423  588836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:45:22.818981  588836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:45:22.819030  588836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:45:22.824948  588836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
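(The b5213941.0 link name used above is the openssl subject hash of the minikube CA. A sketch of re-deriving it on the guest:)
	# Sketch: the name of /etc/ssl/certs/<hash>.0 comes from the cert's subject hash.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem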
	I0210 12:45:22.836686  588836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:45:22.840847  588836 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:45:22.840909  588836 kubeadm.go:392] StartCluster: {Name:addons-692802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-692802 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:45:22.841035  588836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 12:45:22.841095  588836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 12:45:22.881627  588836 cri.go:89] found id: ""
	I0210 12:45:22.881705  588836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 12:45:22.894358  588836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 12:45:22.906407  588836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 12:45:22.918311  588836 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:45:22.918334  588836 kubeadm.go:157] found existing configuration files:
	
	I0210 12:45:22.918379  588836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 12:45:22.929681  588836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:45:22.929736  588836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 12:45:22.941341  588836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 12:45:22.951030  588836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:45:22.951086  588836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 12:45:22.961092  588836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 12:45:22.971181  588836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:45:22.971237  588836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 12:45:22.982231  588836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 12:45:22.993753  588836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:45:22.993815  588836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 12:45:23.004641  588836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 12:45:23.058874  588836 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 12:45:23.059125  588836 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 12:45:23.166766  588836 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 12:45:23.166963  588836 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 12:45:23.167077  588836 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 12:45:23.177974  588836 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 12:45:23.353782  588836 out.go:235]   - Generating certificates and keys ...
	I0210 12:45:23.353883  588836 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 12:45:23.353969  588836 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 12:45:23.354055  588836 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 12:45:23.354107  588836 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 12:45:23.505715  588836 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 12:45:23.738846  588836 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 12:45:23.818408  588836 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 12:45:23.818529  588836 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-692802 localhost] and IPs [192.168.39.213 127.0.0.1 ::1]
	I0210 12:45:24.041264  588836 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 12:45:24.041448  588836 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-692802 localhost] and IPs [192.168.39.213 127.0.0.1 ::1]
	I0210 12:45:24.268190  588836 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 12:45:24.590861  588836 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 12:45:24.892762  588836 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 12:45:24.892965  588836 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 12:45:25.126326  588836 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 12:45:25.349536  588836 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 12:45:25.735754  588836 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 12:45:25.819244  588836 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 12:45:25.935811  588836 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 12:45:25.936339  588836 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 12:45:25.940938  588836 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 12:45:25.943083  588836 out.go:235]   - Booting up control plane ...
	I0210 12:45:25.943234  588836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 12:45:25.943348  588836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 12:45:25.943435  588836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 12:45:25.959007  588836 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:45:25.968221  588836 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:45:25.968294  588836 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 12:45:26.101895  588836 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 12:45:26.102082  588836 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 12:45:26.603374  588836 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.63781ms
	I0210 12:45:26.603479  588836 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 12:45:31.602253  588836 kubeadm.go:310] [api-check] The API server is healthy after 5.00246572s
	I0210 12:45:31.957007  588836 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 12:45:31.982852  588836 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 12:45:32.009699  588836 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 12:45:32.009971  588836 kubeadm.go:310] [mark-control-plane] Marking the node addons-692802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 12:45:32.021939  588836 kubeadm.go:310] [bootstrap-token] Using token: zdor4g.91scm0r9kiuc7rys
	I0210 12:45:32.023269  588836 out.go:235]   - Configuring RBAC rules ...
	I0210 12:45:32.023417  588836 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 12:45:32.029896  588836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 12:45:32.040409  588836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 12:45:32.047448  588836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 12:45:32.052421  588836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 12:45:32.055976  588836 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 12:45:32.174729  588836 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 12:45:32.605727  588836 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 12:45:33.175137  588836 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 12:45:33.176156  588836 kubeadm.go:310] 
	I0210 12:45:33.176238  588836 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 12:45:33.176250  588836 kubeadm.go:310] 
	I0210 12:45:33.176349  588836 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 12:45:33.176359  588836 kubeadm.go:310] 
	I0210 12:45:33.176395  588836 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 12:45:33.176511  588836 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 12:45:33.176595  588836 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 12:45:33.176612  588836 kubeadm.go:310] 
	I0210 12:45:33.176687  588836 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 12:45:33.176698  588836 kubeadm.go:310] 
	I0210 12:45:33.176763  588836 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 12:45:33.176773  588836 kubeadm.go:310] 
	I0210 12:45:33.176866  588836 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 12:45:33.176998  588836 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 12:45:33.177106  588836 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 12:45:33.177122  588836 kubeadm.go:310] 
	I0210 12:45:33.177235  588836 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 12:45:33.177342  588836 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 12:45:33.177352  588836 kubeadm.go:310] 
	I0210 12:45:33.177526  588836 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zdor4g.91scm0r9kiuc7rys \
	I0210 12:45:33.177687  588836 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cda6234c21caed8b2c457fd9fd9a427fa0fd7aae97fbc146e2dc2d4939983fe9 \
	I0210 12:45:33.177742  588836 kubeadm.go:310] 	--control-plane 
	I0210 12:45:33.177752  588836 kubeadm.go:310] 
	I0210 12:45:33.177823  588836 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 12:45:33.177830  588836 kubeadm.go:310] 
	I0210 12:45:33.177933  588836 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zdor4g.91scm0r9kiuc7rys \
	I0210 12:45:33.178027  588836 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cda6234c21caed8b2c457fd9fd9a427fa0fd7aae97fbc146e2dc2d4939983fe9 
	I0210 12:45:33.178621  588836 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 12:45:33.178728  588836 cni.go:84] Creating CNI manager for ""
	I0210 12:45:33.178751  588836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:45:33.181426  588836 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 12:45:33.182739  588836 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 12:45:33.195003  588836 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
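The bridge CNI step copies a 496-byte conflist from memory to /etc/cni/net.d/1-k8s.conflist on the guest. The file contents are not shown in the log; the sketch below writes a hypothetical minimal bridge conflist to a local path purely to illustrate the shape of such a file (the field values are assumptions, not minikube's actual template):

    package main

    import "os"

    // A hypothetical minimal CNI conflist using the bridge plugin; minikube's real
    // 1-k8s.conflist may differ in name, subnet ranges and plugin list.
    const bridgeConflist = `{
      "cniVersion": "0.4.0",
      "name": "k8s-pod-network",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
    	// Writing to a local path for illustration; the log shows the file landing
    	// in /etc/cni/net.d/ on the guest via the scp-from-memory helper.
    	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }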
	I0210 12:45:33.220029  588836 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 12:45:33.220118  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:33.220150  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-692802 minikube.k8s.io/updated_at=2025_02_10T12_45_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04 minikube.k8s.io/name=addons-692802 minikube.k8s.io/primary=true
	I0210 12:45:33.391192  588836 ops.go:34] apiserver oom_adj: -16
	I0210 12:45:33.391194  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:33.891288  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:34.391516  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:34.892197  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:35.392063  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:35.891791  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:36.392025  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:36.891684  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:37.392156  588836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:45:37.475305  588836 kubeadm.go:1113] duration metric: took 4.255263779s to wait for elevateKubeSystemPrivileges
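The half-second cadence of the repeated "kubectl get sa default" runs above is the wait for kube-system privileges to settle: the command is retried until the default service account exists, after which the 4.25s duration metric is reported. A rough Go sketch of that polling loop, using a kubectl on the local PATH instead of the guest binary path from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	deadline := start.Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Succeeds only once the "default" service account has been created.
    		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
    			fmt.Printf("default service account ready after %s\n", time.Since(start))
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing between attempts in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }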
	I0210 12:45:37.475340  588836 kubeadm.go:394] duration metric: took 14.634440291s to StartCluster
	I0210 12:45:37.475362  588836 settings.go:142] acquiring lock: {Name:mk7daa7e5390489a50205707c4b69542e21eb74b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:37.475501  588836 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 12:45:37.475959  588836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:45:37.476407  588836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 12:45:37.476426  588836 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 12:45:37.476508  588836 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
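The toEnable map above is a per-addon boolean toggle; the lines that follow are one pass over the enabled entries, each spinning up its own driver connection. A small sketch of iterating such a map, with a stub enable function standing in for minikube's per-addon install logic (the addon names are copied from the log, the function itself is hypothetical):

    package main

    import "fmt"

    func enable(profile, addon string) error {
    	// Stand-in for the real per-addon work (apply manifests, wait for pods, ...).
    	fmt.Printf("enabling addon %q in profile %q\n", addon, profile)
    	return nil
    }

    func main() {
    	toEnable := map[string]bool{
    		"ingress": true, "ingress-dns": true, "metrics-server": true,
    		"registry": true, "storage-provisioner": true, "volcano": true,
    		"volumesnapshots": true, "yakd": true, "dashboard": false,
    	}
    	// Map iteration order is randomized in Go, which is fine for independent addons.
    	for name, on := range toEnable {
    		if !on {
    			continue
    		}
    		if err := enable("addons-692802", name); err != nil {
    			fmt.Println("failed to enable", name, ":", err)
    		}
    	}
    }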
	I0210 12:45:37.476634  588836 addons.go:69] Setting yakd=true in profile "addons-692802"
	I0210 12:45:37.476645  588836 config.go:182] Loaded profile config "addons-692802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:45:37.476658  588836 addons.go:238] Setting addon yakd=true in "addons-692802"
	I0210 12:45:37.476662  588836 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-692802"
	I0210 12:45:37.476646  588836 addons.go:69] Setting gcp-auth=true in profile "addons-692802"
	I0210 12:45:37.476698  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.476694  588836 addons.go:69] Setting cloud-spanner=true in profile "addons-692802"
	I0210 12:45:37.476712  588836 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-692802"
	I0210 12:45:37.476713  588836 mustload.go:65] Loading cluster: addons-692802
	I0210 12:45:37.476731  588836 addons.go:238] Setting addon cloud-spanner=true in "addons-692802"
	I0210 12:45:37.476747  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.476766  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.476759  588836 addons.go:69] Setting default-storageclass=true in profile "addons-692802"
	I0210 12:45:37.476774  588836 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-692802"
	I0210 12:45:37.476797  588836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-692802"
	I0210 12:45:37.476818  588836 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-692802"
	I0210 12:45:37.476850  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.476892  588836 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-692802"
	I0210 12:45:37.477463  588836 config.go:182] Loaded profile config "addons-692802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:45:37.477488  588836 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-692802"
	I0210 12:45:37.477452  588836 addons.go:69] Setting ingress-dns=true in profile "addons-692802"
	I0210 12:45:37.477534  588836 addons.go:238] Setting addon ingress-dns=true in "addons-692802"
	I0210 12:45:37.477582  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.477993  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.478011  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.478039  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.478103  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.478190  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.478278  588836 out.go:177] * Verifying Kubernetes components...
	I0210 12:45:37.478368  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.478415  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.478544  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.477462  588836 addons.go:69] Setting inspektor-gadget=true in profile "addons-692802"
	I0210 12:45:37.478663  588836 addons.go:238] Setting addon inspektor-gadget=true in "addons-692802"
	I0210 12:45:37.478715  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.478711  588836 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-692802"
	I0210 12:45:37.478737  588836 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-692802"
	I0210 12:45:37.479056  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.479094  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.479266  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.479271  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.479305  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.479324  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.477472  588836 addons.go:69] Setting metrics-server=true in profile "addons-692802"
	I0210 12:45:37.480101  588836 addons.go:238] Setting addon metrics-server=true in "addons-692802"
	I0210 12:45:37.480118  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.480134  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.480150  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.480351  588836 addons.go:69] Setting registry=true in profile "addons-692802"
	I0210 12:45:37.480367  588836 addons.go:238] Setting addon registry=true in "addons-692802"
	I0210 12:45:37.480395  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.480735  588836 addons.go:69] Setting storage-provisioner=true in profile "addons-692802"
	I0210 12:45:37.480791  588836 addons.go:238] Setting addon storage-provisioner=true in "addons-692802"
	I0210 12:45:37.477432  588836 addons.go:69] Setting ingress=true in profile "addons-692802"
	I0210 12:45:37.480875  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.480929  588836 addons.go:69] Setting volcano=true in profile "addons-692802"
	I0210 12:45:37.480966  588836 addons.go:238] Setting addon volcano=true in "addons-692802"
	I0210 12:45:37.481363  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.480818  588836 addons.go:69] Setting volumesnapshots=true in profile "addons-692802"
	I0210 12:45:37.481610  588836 addons.go:238] Setting addon volumesnapshots=true in "addons-692802"
	I0210 12:45:37.481018  588836 addons.go:238] Setting addon ingress=true in "addons-692802"
	I0210 12:45:37.481691  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.481234  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.481332  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.484567  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.485735  588836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:45:37.485833  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.499980  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0210 12:45:37.500736  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.501478  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.501504  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.502004  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.502792  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.502845  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.503714  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35703
	I0210 12:45:37.503924  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0210 12:45:37.504352  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.504728  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.504953  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.504990  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.505440  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.505572  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.505606  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.505676  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.506011  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.506231  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.508495  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I0210 12:45:37.508953  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.509494  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.509508  588836 main.go:141] libmachine: () Calling .SetConfigRaw
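Each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:<port>" pair above is libmachine starting the docker-machine-driver-kvm2 binary as a child process and talking to it over a loopback RPC port; the repeated GetVersion/SetConfigRaw/GetMachineName calls are that per-connection handshake. A rough sketch of only the launch-and-discover-address half of that pattern (the binary path is taken from the log, the address-line format is an assumption, and the actual RPC protocol is not reproduced here):

    package main

    import (
    	"bufio"
    	"fmt"
    	"net"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	// Driver binary path as logged; adjust for your environment.
    	cmd := exec.Command("/home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2")
    	stdout, err := cmd.StdoutPipe()
    	if err != nil {
    		panic(err)
    	}
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	scanner := bufio.NewScanner(stdout)
    	for scanner.Scan() {
    		line := scanner.Text()
    		// Assumed: the driver announces where its RPC server listens, e.g. "127.0.0.1:34833".
    		if i := strings.Index(line, "listening at address "); i >= 0 {
    			addr := strings.TrimSpace(line[i+len("listening at address "):])
    			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    			if err != nil {
    				panic(err)
    			}
    			fmt.Println("plugin RPC server reachable at", addr)
    			conn.Close()
    			break
    		}
    	}
    	_ = cmd.Process.Kill()
    }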
	I0210 12:45:37.511208  588836 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-692802"
	I0210 12:45:37.511271  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.511778  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.511819  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.512219  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0210 12:45:37.512352  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.512516  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0210 12:45:37.513051  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.513132  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.513252  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.513301  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.513770  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.513792  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.514199  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.514227  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.514261  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.514622  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.514763  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.514792  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.516882  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.516927  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.517503  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.517539  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.517592  588836 addons.go:238] Setting addon default-storageclass=true in "addons-692802"
	I0210 12:45:37.517630  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.517656  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.517704  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.518073  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.518107  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.518231  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.518258  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.518869  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.518904  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.519619  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.519658  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.525590  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.525647  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.525661  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.525704  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.525973  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38025
	I0210 12:45:37.526596  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.527207  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.527248  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.527850  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.528614  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.528661  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.546034  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I0210 12:45:37.546040  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0210 12:45:37.546721  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.546794  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.547439  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.547461  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.547560  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.547594  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.547866  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.548001  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.548643  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.548691  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.548995  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.560957  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.561115  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0210 12:45:37.561290  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0210 12:45:37.561499  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
	I0210 12:45:37.561662  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.561791  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0210 12:45:37.561814  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.562380  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.562387  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.562403  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.562409  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.562604  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0210 12:45:37.562822  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.563485  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.563530  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.563540  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.564086  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.564107  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.564179  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.564609  588836 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0210 12:45:37.564853  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.564892  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.564975  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0210 12:45:37.565483  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.565556  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.565616  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.565678  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.566189  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.566230  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.566274  588836 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 12:45:37.566291  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0210 12:45:37.566314  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.566396  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.566811  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.566858  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.567270  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.568085  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.568536  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.568557  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.568108  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.569308  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0210 12:45:37.569339  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.569680  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.569761  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.570107  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.570186  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.570513  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.570652  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.571364  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.571383  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.571789  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.571832  588836 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0210 12:45:37.572204  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.572485  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.573086  588836 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0210 12:45:37.573139  588836 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0210 12:45:37.573175  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.573312  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.573915  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.575261  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.576089  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.576296  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.576361  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.576389  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:37.576825  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.577217  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.576883  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.577035  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.577720  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.577884  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.577983  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.577991  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.578026  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.578314  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.580150  588836 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0210 12:45:37.580985  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37401
	I0210 12:45:37.581483  588836 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0210 12:45:37.581505  588836 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0210 12:45:37.581525  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.581647  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.582592  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.582613  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.582727  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0210 12:45:37.583291  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.583744  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.583925  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.583940  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.584409  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.585121  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.585174  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.585500  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.585531  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.585554  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.585673  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.586231  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.586287  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.586667  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.586741  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0210 12:45:37.587103  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.587272  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
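The "new ssh client" lines above record the connection details reused for every addon copy (IP 192.168.39.213, port 22, the per-machine id_rsa key, user docker), and the "scp memory -->" lines stream manifest bytes straight from memory to a path on the guest. A sketch of that copy-from-memory-over-SSH idea using golang.org/x/crypto/ssh; this is not minikube's sshutil implementation, the tee-based transfer is an assumption, and the manifest content is a placeholder:

    package main

    import (
    	"bytes"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	manifest := []byte("# placeholder addon manifest\n")
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.213:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(manifest)
    	// Write the in-memory bytes to the target path on the guest.
    	if err := sess.Run("sudo tee /etc/kubernetes/addons/example.yaml >/dev/null"); err != nil {
    		panic(err)
    	}
    }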
	I0210 12:45:37.590531  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39705
	I0210 12:45:37.591073  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.591705  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.591731  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.592179  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.592818  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.592866  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.593147  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.593720  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.593739  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.594219  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.594511  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.595058  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0210 12:45:37.595564  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.596138  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.596156  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.596524  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.596622  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.596885  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.598545  588836 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0210 12:45:37.598546  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.599894  588836 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0210 12:45:37.599914  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0210 12:45:37.599935  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.600410  588836 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0210 12:45:37.602150  588836 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 12:45:37.602176  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0210 12:45:37.602196  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.604503  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0210 12:45:37.604651  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.604985  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.605425  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.605465  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.605717  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.605917  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.606076  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.606230  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.607119  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I0210 12:45:37.607714  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.607805  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.608124  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.608140  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.608235  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.608255  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.608529  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.608544  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.608614  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.608739  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.608809  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.608881  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.609155  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.609351  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.609408  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.610006  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.610051  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.614927  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40969
	I0210 12:45:37.615485  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.616070  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.616091  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.616530  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.616747  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.618500  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.618946  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0210 12:45:37.619661  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.620389  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.620415  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.620589  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0210 12:45:37.620800  588836 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0210 12:45:37.621117  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.621213  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.621727  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.621746  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.621811  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.622075  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0210 12:45:37.622081  588836 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 12:45:37.622100  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0210 12:45:37.622119  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.622513  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.622530  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.622870  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.623061  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.623077  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.623461  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.623713  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.623993  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0210 12:45:37.624120  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:37.624167  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:37.624540  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.625058  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.625238  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.625276  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.625677  588836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:45:37.625723  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.625976  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.626632  588836 out.go:177]   - Using image docker.io/registry:2.8.3
	I0210 12:45:37.627474  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0210 12:45:37.627738  588836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0210 12:45:37.627973  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.628538  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.628553  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.628794  588836 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0210 12:45:37.628921  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.628968  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.629329  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.629586  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I0210 12:45:37.629776  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I0210 12:45:37.630159  588836 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0210 12:45:37.630178  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0210 12:45:37.630198  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.630252  588836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:45:37.630716  588836 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0210 12:45:37.631102  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.631566  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.631832  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.631907  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.632378  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.632836  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.632442  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.632857  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.632724  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.632750  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.632903  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.633141  588836 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 12:45:37.633236  588836 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 12:45:37.633254  588836 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 12:45:37.633273  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.633323  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.633347  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.633367  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.633967  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.634027  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.634075  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.634252  588836 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:45:37.633159  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0210 12:45:37.635184  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.635699  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.636074  588836 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:45:37.636127  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 12:45:37.636151  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.636229  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.636844  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.636933  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0210 12:45:37.637104  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.637160  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.637175  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.638021  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.638844  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.639105  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.639437  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.639541  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0210 12:45:37.640718  588836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0210 12:45:37.640745  588836 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0210 12:45:37.640831  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.641721  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.641742  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.641760  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.641774  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.642285  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.642507  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.643857  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.646014  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.646044  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.646062  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.646090  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.646110  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.646127  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.646270  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.646292  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.646367  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.646504  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.646526  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.646543  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.646887  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.646949  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.647009  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.647061  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I0210 12:45:37.647222  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.647406  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.647437  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.647411  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0210 12:45:37.647634  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.647761  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.647992  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.648013  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.648039  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:37.648071  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.648070  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.648386  588836 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0210 12:45:37.648740  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.648822  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:37.648841  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:37.648968  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.649154  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.649310  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:37.649328  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.649881  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:37.649880  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.649926  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.649957  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	W0210 12:45:37.649994  588836 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36966->192.168.39.213:22: read: connection reset by peer
	I0210 12:45:37.650027  588836 retry.go:31] will retry after 315.535889ms: ssh: handshake failed: read tcp 192.168.39.1:36966->192.168.39.213:22: read: connection reset by peer
	I0210 12:45:37.651013  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.651150  588836 out.go:177]   - Using image docker.io/busybox:stable
	I0210 12:45:37.651320  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.651463  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:37.651475  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:37.652394  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0210 12:45:37.652517  588836 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 12:45:37.652539  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0210 12:45:37.652558  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.652694  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:37.652759  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:37.652780  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:37.652786  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:37.652793  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:37.652803  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:37.653159  588836 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 12:45:37.653174  588836 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 12:45:37.653186  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.653212  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:37.653219  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:37.653228  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	W0210 12:45:37.653297  588836 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0210 12:45:37.655822  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0210 12:45:37.656190  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.656933  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.656968  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.657006  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.656961  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.657393  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.657414  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.657432  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.657595  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.657606  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.657714  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.657731  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.657861  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.657952  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:37.658265  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W0210 12:45:37.658489  588836 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36968->192.168.39.213:22: read: connection reset by peer
	I0210 12:45:37.658511  588836 retry.go:31] will retry after 132.986687ms: ssh: handshake failed: read tcp 192.168.39.1:36968->192.168.39.213:22: read: connection reset by peer
	W0210 12:45:37.658577  588836 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36976->192.168.39.213:22: read: connection reset by peer
	I0210 12:45:37.658585  588836 retry.go:31] will retry after 320.200133ms: ssh: handshake failed: read tcp 192.168.39.1:36976->192.168.39.213:22: read: connection reset by peer
	I0210 12:45:37.661001  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0210 12:45:37.662438  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0210 12:45:37.663469  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0210 12:45:37.664603  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0210 12:45:37.665673  588836 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0210 12:45:37.666602  588836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0210 12:45:37.666616  588836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0210 12:45:37.666632  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:37.669326  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.669697  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:37.669725  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:37.669852  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:37.670072  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:37.670239  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:37.670406  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	W0210 12:45:37.671091  588836 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36992->192.168.39.213:22: read: connection reset by peer
	I0210 12:45:37.671117  588836 retry.go:31] will retry after 272.885562ms: ssh: handshake failed: read tcp 192.168.39.1:36992->192.168.39.213:22: read: connection reset by peer
	I0210 12:45:38.010842  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 12:45:38.033964  588836 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0210 12:45:38.034007  588836 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0210 12:45:38.045189  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 12:45:38.047000  588836 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0210 12:45:38.047026  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0210 12:45:38.071038  588836 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0210 12:45:38.071068  588836 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0210 12:45:38.079987  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0210 12:45:38.085116  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 12:45:38.090582  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:45:38.112112  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 12:45:38.149659  588836 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 12:45:38.149685  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0210 12:45:38.164519  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 12:45:38.166178  588836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 12:45:38.166198  588836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:45:38.264705  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0210 12:45:38.287610  588836 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0210 12:45:38.287641  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0210 12:45:38.293453  588836 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0210 12:45:38.293473  588836 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0210 12:45:38.346760  588836 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 12:45:38.346794  588836 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 12:45:38.454490  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0210 12:45:38.467833  588836 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0210 12:45:38.467879  588836 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0210 12:45:38.472919  588836 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0210 12:45:38.472950  588836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0210 12:45:38.518279  588836 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0210 12:45:38.518306  588836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0210 12:45:38.550370  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 12:45:38.598594  588836 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 12:45:38.598629  588836 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 12:45:38.759313  588836 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0210 12:45:38.759340  588836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0210 12:45:38.774257  588836 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0210 12:45:38.774286  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0210 12:45:38.798233  588836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0210 12:45:38.798262  588836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0210 12:45:38.812042  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 12:45:38.951189  588836 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0210 12:45:38.951226  588836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0210 12:45:39.009661  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0210 12:45:39.042661  588836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0210 12:45:39.042702  588836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0210 12:45:39.427390  588836 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0210 12:45:39.427420  588836 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0210 12:45:39.544299  588836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0210 12:45:39.544332  588836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0210 12:45:39.845062  588836 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:45:39.845091  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0210 12:45:39.918955  588836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0210 12:45:39.918988  588836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0210 12:45:40.000546  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.989647686s)
	I0210 12:45:40.000662  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:40.000683  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:40.001085  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:40.001107  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:40.001118  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:40.001126  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:40.001393  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:40.001443  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:40.001460  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:40.310012  588836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0210 12:45:40.310039  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0210 12:45:40.345034  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:45:40.493499  588836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0210 12:45:40.493533  588836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0210 12:45:40.790427  588836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0210 12:45:40.790464  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0210 12:45:41.034999  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.989762782s)
	I0210 12:45:41.035046  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.955023362s)
	I0210 12:45:41.035074  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:41.035088  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:41.035099  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:41.035112  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:41.035452  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:41.035471  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:41.035482  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:41.035489  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:41.035499  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:41.035515  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:41.035525  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:41.035533  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:41.035844  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:41.035871  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:41.035874  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:41.035891  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:41.035902  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:41.146775  588836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0210 12:45:41.146815  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0210 12:45:41.563331  588836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 12:45:41.563366  588836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0210 12:45:41.951717  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 12:45:43.057230  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.972062431s)
	I0210 12:45:43.057313  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:43.057326  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:43.057744  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:43.057763  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:43.057773  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:43.057781  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:43.058039  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:43.058075  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:43.058083  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:43.162421  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:43.162449  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:43.162773  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:43.162791  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:43.356832  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.266206846s)
	I0210 12:45:43.356855  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.244713949s)
	I0210 12:45:43.356912  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:43.356962  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:43.356991  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:43.357006  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:43.357302  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:43.357318  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:43.357328  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:43.357338  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:43.357669  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:43.357674  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:43.357681  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:43.357689  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:43.357670  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:43.357699  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:43.357696  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:43.357706  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:43.357890  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:43.357905  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:43.357908  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:44.418028  588836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0210 12:45:44.418073  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:44.421253  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:44.421753  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:44.421788  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:44.421994  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:44.422236  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:44.422429  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:44.422620  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:44.903380  588836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0210 12:45:45.004873  588836 addons.go:238] Setting addon gcp-auth=true in "addons-692802"
	I0210 12:45:45.004947  588836 host.go:66] Checking if "addons-692802" exists ...
	I0210 12:45:45.005260  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:45.005290  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:45.021082  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0210 12:45:45.021668  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:45.022302  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:45.022331  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:45.022694  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:45.023192  588836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:45:45.023220  588836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:45:45.039525  588836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
	I0210 12:45:45.040018  588836 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:45:45.040628  588836 main.go:141] libmachine: Using API Version  1
	I0210 12:45:45.040658  588836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:45:45.041106  588836 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:45:45.041327  588836 main.go:141] libmachine: (addons-692802) Calling .GetState
	I0210 12:45:45.042723  588836 main.go:141] libmachine: (addons-692802) Calling .DriverName
	I0210 12:45:45.042945  588836 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0210 12:45:45.042970  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHHostname
	I0210 12:45:45.045801  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:45.046237  588836 main.go:141] libmachine: (addons-692802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:9a:c4", ip: ""} in network mk-addons-692802: {Iface:virbr1 ExpiryTime:2025-02-10 13:45:07 +0000 UTC Type:0 Mac:52:54:00:13:9a:c4 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:addons-692802 Clientid:01:52:54:00:13:9a:c4}
	I0210 12:45:45.046268  588836 main.go:141] libmachine: (addons-692802) DBG | domain addons-692802 has defined IP address 192.168.39.213 and MAC address 52:54:00:13:9a:c4 in network mk-addons-692802
	I0210 12:45:45.046413  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHPort
	I0210 12:45:45.046640  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHKeyPath
	I0210 12:45:45.046803  588836 main.go:141] libmachine: (addons-692802) Calling .GetSSHUsername
	I0210 12:45:45.046959  588836 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/addons-692802/id_rsa Username:docker}
	I0210 12:45:46.416359  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.251798246s)
	I0210 12:45:46.416434  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.416447  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.416448  588836 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.250231072s)
	I0210 12:45:46.416484  588836 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0210 12:45:46.416532  588836 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.250308078s)
	I0210 12:45:46.416752  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.416766  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.416767  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.152031394s)
	I0210 12:45:46.416774  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.416782  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.416822  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.416840  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.416850  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.416861  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.962326884s)
	I0210 12:45:46.416897  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.416911  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.866483244s)
	I0210 12:45:46.416931  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.416946  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.416914  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.416993  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.604917451s)
	I0210 12:45:46.417009  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.417019  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.417046  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.417100  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.407399029s)
	I0210 12:45:46.417118  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.417127  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.417150  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.417177  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.417187  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.417194  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.417256  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.072188326s)
	W0210 12:45:46.417302  588836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 12:45:46.417323  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.417328  588836 retry.go:31] will retry after 322.004427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 12:45:46.417362  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.417372  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.417380  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.417381  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.417386  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.417390  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.417401  588836 addons.go:479] Verifying addon ingress=true in "addons-692802"
	I0210 12:45:46.417435  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.417574  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.417590  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.417599  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.417606  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.417638  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.417667  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.417677  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.417680  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.417686  588836 addons.go:479] Verifying addon metrics-server=true in "addons-692802"
	I0210 12:45:46.417687  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.417706  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.418938  588836 out.go:177] * Verifying ingress addon...
	I0210 12:45:46.419391  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.419418  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.419427  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.419433  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.419439  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.419498  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.419518  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.419523  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.419696  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.419717  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.419723  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.419729  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.419736  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.417486  588836 node_ready.go:35] waiting up to 6m0s for node "addons-692802" to be "Ready" ...
	I0210 12:45:46.419822  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.419853  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.419862  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.419871  588836 addons.go:479] Verifying addon registry=true in "addons-692802"
	I0210 12:45:46.420026  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.420043  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.419979  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:46.421679  588836 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-692802 service yakd-dashboard -n yakd-dashboard
	
	I0210 12:45:46.421780  588836 out.go:177] * Verifying registry addon...
	I0210 12:45:46.423435  588836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0210 12:45:46.424397  588836 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0210 12:45:46.443885  588836 node_ready.go:49] node "addons-692802" has status "Ready":"True"
	I0210 12:45:46.443922  588836 node_ready.go:38] duration metric: took 24.17097ms for node "addons-692802" to be "Ready" ...
	I0210 12:45:46.443939  588836 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:45:46.461884  588836 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0210 12:45:46.461919  588836 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0210 12:45:46.461947  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:46.461920  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:46.534961  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:46.534998  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:46.535333  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:46.535387  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:46.562821  588836 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xm9dq" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:46.740374  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:45:46.920014  588836 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-692802" context rescaled to 1 replicas
	I0210 12:45:46.928404  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:46.928407  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:47.431502  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:47.431637  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:47.932731  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:47.932847  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:48.330344  588836 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.287373233s)
	I0210 12:45:48.330297  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.378523273s)
	I0210 12:45:48.330517  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:48.330531  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:48.330907  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:48.330962  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:48.330972  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:48.330980  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:48.330988  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:48.331244  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:48.331257  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:48.331276  588836 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-692802"
	I0210 12:45:48.332131  588836 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:45:48.332931  588836 out.go:177] * Verifying csi-hostpath-driver addon...
	I0210 12:45:48.334681  588836 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0210 12:45:48.335399  588836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0210 12:45:48.335744  588836 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0210 12:45:48.335770  588836 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0210 12:45:48.343777  588836 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0210 12:45:48.343795  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:48.448219  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:48.463663  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:48.472415  588836 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0210 12:45:48.472447  588836 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0210 12:45:48.580918  588836 pod_ready.go:103] pod "amd-gpu-device-plugin-xm9dq" in "kube-system" namespace has status "Ready":"False"
	I0210 12:45:48.598641  588836 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 12:45:48.598667  588836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0210 12:45:48.690319  588836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 12:45:48.840437  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:48.926894  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:48.928072  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:49.155379  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.414943854s)
	I0210 12:45:49.155447  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:49.155467  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:49.155806  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:49.155854  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:49.155853  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:49.155867  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:49.155877  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:49.156166  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:49.156182  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:49.339085  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:49.426804  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:49.427748  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:49.843000  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:49.946178  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:49.946518  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:50.111840  588836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.421472086s)
	I0210 12:45:50.111933  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:50.111960  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:50.112383  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:50.112404  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:50.112412  588836 main.go:141] libmachine: Making call to close driver server
	I0210 12:45:50.112418  588836 main.go:141] libmachine: (addons-692802) Calling .Close
	I0210 12:45:50.112463  588836 main.go:141] libmachine: (addons-692802) DBG | Closing plugin on server side
	I0210 12:45:50.112690  588836 main.go:141] libmachine: Successfully made call to close driver server
	I0210 12:45:50.112717  588836 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 12:45:50.114803  588836 addons.go:479] Verifying addon gcp-auth=true in "addons-692802"
	I0210 12:45:50.116364  588836 out.go:177] * Verifying gcp-auth addon...
	I0210 12:45:50.118594  588836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0210 12:45:50.173842  588836 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0210 12:45:50.173866  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:50.339439  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:50.432625  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:50.432947  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:50.623515  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:50.842197  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:50.927279  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:50.927442  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:51.068826  588836 pod_ready.go:103] pod "amd-gpu-device-plugin-xm9dq" in "kube-system" namespace has status "Ready":"False"
	I0210 12:45:51.121565  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:51.338975  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:51.428423  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:51.428531  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:51.621329  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:51.839697  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:51.926512  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:51.927091  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:52.124131  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:52.340518  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:52.427426  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:52.428019  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:52.622144  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:52.839249  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:52.926928  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:52.935152  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:53.354576  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:53.354629  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:53.427063  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:53.428210  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:53.568010  588836 pod_ready.go:103] pod "amd-gpu-device-plugin-xm9dq" in "kube-system" namespace has status "Ready":"False"
	I0210 12:45:53.621728  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:53.839368  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:53.927580  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:53.927665  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:54.122173  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:54.339480  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:54.428431  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:54.428544  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:54.622415  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:54.852013  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:54.926716  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:54.929301  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:55.129452  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:55.338701  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:55.431147  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:55.431559  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:55.569619  588836 pod_ready.go:103] pod "amd-gpu-device-plugin-xm9dq" in "kube-system" namespace has status "Ready":"False"
	I0210 12:45:55.622593  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:55.839158  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:55.926937  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:55.927671  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:56.121505  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:56.338631  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:56.426541  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:56.427538  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:56.621182  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:56.839195  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:56.927757  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:56.928779  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:57.122011  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:57.339528  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:57.428342  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:57.428669  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:57.622207  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:57.840979  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:57.940156  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:57.940781  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:58.068962  588836 pod_ready.go:93] pod "amd-gpu-device-plugin-xm9dq" in "kube-system" namespace has status "Ready":"True"
	I0210 12:45:58.069011  588836 pod_ready.go:82] duration metric: took 11.506152047s for pod "amd-gpu-device-plugin-xm9dq" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.069030  588836 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sdttz" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.073084  588836 pod_ready.go:93] pod "coredns-668d6bf9bc-sdttz" in "kube-system" namespace has status "Ready":"True"
	I0210 12:45:58.073113  588836 pod_ready.go:82] duration metric: took 4.070669ms for pod "coredns-668d6bf9bc-sdttz" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.073123  588836 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z666g" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.074860  588836 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-z666g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-z666g" not found
	I0210 12:45:58.074882  588836 pod_ready.go:82] duration metric: took 1.752575ms for pod "coredns-668d6bf9bc-z666g" in "kube-system" namespace to be "Ready" ...
	E0210 12:45:58.074902  588836 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-z666g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-z666g" not found
	I0210 12:45:58.074910  588836 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.078699  588836 pod_ready.go:93] pod "etcd-addons-692802" in "kube-system" namespace has status "Ready":"True"
	I0210 12:45:58.078722  588836 pod_ready.go:82] duration metric: took 3.80525ms for pod "etcd-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.078734  588836 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.082651  588836 pod_ready.go:93] pod "kube-apiserver-addons-692802" in "kube-system" namespace has status "Ready":"True"
	I0210 12:45:58.082670  588836 pod_ready.go:82] duration metric: took 3.92946ms for pod "kube-apiserver-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.082679  588836 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.121754  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:58.267381  588836 pod_ready.go:93] pod "kube-controller-manager-addons-692802" in "kube-system" namespace has status "Ready":"True"
	I0210 12:45:58.267409  588836 pod_ready.go:82] duration metric: took 184.720464ms for pod "kube-controller-manager-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.267423  588836 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r5fh8" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.339526  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:58.426416  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:58.427626  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:58.621942  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:58.667849  588836 pod_ready.go:93] pod "kube-proxy-r5fh8" in "kube-system" namespace has status "Ready":"True"
	I0210 12:45:58.667882  588836 pod_ready.go:82] duration metric: took 400.452958ms for pod "kube-proxy-r5fh8" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.667895  588836 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:58.840326  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:58.990799  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:58.991143  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:59.067049  588836 pod_ready.go:93] pod "kube-scheduler-addons-692802" in "kube-system" namespace has status "Ready":"True"
	I0210 12:45:59.067083  588836 pod_ready.go:82] duration metric: took 399.180751ms for pod "kube-scheduler-addons-692802" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:59.067097  588836 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace to be "Ready" ...
	I0210 12:45:59.122000  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:59.338945  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:59.426863  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:59.427540  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:45:59.621271  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:45:59.843461  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:45:59.926680  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:45:59.928138  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:00.121313  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:00.339895  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:00.426797  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:00.427922  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:00.621438  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:00.839585  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:00.927226  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:00.928332  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:01.073676  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:01.121967  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:01.340561  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:01.428602  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:01.428832  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:01.621242  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:01.839277  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:01.927506  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:01.928184  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:02.122689  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:02.339151  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:02.426850  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:02.427452  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:02.621447  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:02.838929  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:02.926503  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:02.928299  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:03.122468  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:03.338782  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:03.427488  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:03.428316  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:03.572349  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:03.621816  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:03.857477  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:03.926980  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:03.929251  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:04.122820  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:04.339113  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:04.426841  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:04.430214  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:04.621231  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:04.839588  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:04.927507  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:04.927957  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:05.122202  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:05.339431  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:05.427690  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:05.427782  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:05.573164  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:05.621735  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:05.838919  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:05.939217  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:05.939296  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:06.121777  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:06.338961  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:06.437564  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:06.437863  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:06.623962  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:06.840154  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:06.927985  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:06.928713  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:07.122373  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:07.338467  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:07.428263  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:07.428437  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:07.625198  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:08.187915  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:08.188011  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:08.188024  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:08.192356  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:08.193961  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:08.339273  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:08.427204  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:08.427426  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:08.621320  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:08.839282  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:08.927278  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:08.928292  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:09.122516  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:09.338981  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:09.426757  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:09.428497  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:09.621499  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:09.838918  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:09.926987  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:09.928419  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:10.122506  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:10.550734  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:10.550930  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:10.551193  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:10.572018  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:10.622267  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:10.839367  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:10.927503  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:10.927513  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:11.122579  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:11.339087  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:11.426879  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:11.428104  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:11.622105  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:11.839025  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:11.927046  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:11.927458  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:12.124783  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:12.339409  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:12.426913  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:12.427254  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:12.674041  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:12.677366  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:12.839764  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:12.940556  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:12.940812  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:13.121897  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:13.340832  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:13.429042  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:13.429309  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:13.622154  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:13.839244  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:13.927438  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:13.927616  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:14.121582  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:14.338702  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:14.426489  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:14.427727  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:14.621441  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:14.839182  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:14.927011  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:14.927928  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:15.072778  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:15.121389  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:15.339208  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:15.427963  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:15.428618  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:15.621816  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:15.839862  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:15.926411  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:15.928050  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:16.122350  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:16.338420  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:16.427831  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:16.427836  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:16.621134  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:16.839691  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:16.927329  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:16.928527  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:17.121852  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:17.339589  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:17.426503  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:17.427362  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:17.572846  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:17.621545  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:17.838814  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:17.926195  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:17.928584  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:18.122403  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:18.339970  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:18.427546  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:18.427782  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:18.622085  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:18.840391  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:18.928917  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:18.929866  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:19.122363  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:19.341331  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:19.428131  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:19.428249  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:19.622166  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:19.840082  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:19.928054  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:19.928596  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:20.073528  588836 pod_ready.go:103] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"False"
	I0210 12:46:20.122941  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:20.340059  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:20.428322  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:20.428359  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:21.012885  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:21.012881  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:21.013444  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:21.013513  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:21.122521  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:21.339369  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:21.428869  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:21.428906  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:21.621726  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:21.839205  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:21.927688  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:21.928588  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:22.122345  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:22.339393  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:22.427149  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:22.427332  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:22.575543  588836 pod_ready.go:93] pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace has status "Ready":"True"
	I0210 12:46:22.575576  588836 pod_ready.go:82] duration metric: took 23.508470289s for pod "metrics-server-7fbb699795-kjjv7" in "kube-system" namespace to be "Ready" ...
	I0210 12:46:22.575592  588836 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8gl6m" in "kube-system" namespace to be "Ready" ...
	I0210 12:46:22.586426  588836 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-8gl6m" in "kube-system" namespace has status "Ready":"True"
	I0210 12:46:22.586450  588836 pod_ready.go:82] duration metric: took 10.851194ms for pod "nvidia-device-plugin-daemonset-8gl6m" in "kube-system" namespace to be "Ready" ...
	I0210 12:46:22.586470  588836 pod_ready.go:39] duration metric: took 36.142514383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:46:22.586502  588836 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:46:22.586569  588836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:46:22.623581  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:22.623881  588836 api_server.go:72] duration metric: took 45.14741175s to wait for apiserver process to appear ...
	I0210 12:46:22.623909  588836 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:46:22.623935  588836 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0210 12:46:22.631649  588836 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0210 12:46:22.632651  588836 api_server.go:141] control plane version: v1.32.1
	I0210 12:46:22.632674  588836 api_server.go:131] duration metric: took 8.758999ms to wait for apiserver health ...
	I0210 12:46:22.632683  588836 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 12:46:22.637810  588836 system_pods.go:59] 18 kube-system pods found
	I0210 12:46:22.637844  588836 system_pods.go:61] "amd-gpu-device-plugin-xm9dq" [a41ec1b7-1187-4b0c-8e9d-a736a1ea5cd5] Running
	I0210 12:46:22.637853  588836 system_pods.go:61] "coredns-668d6bf9bc-sdttz" [d8a6d04a-c10a-4320-b224-4568f3ec83b5] Running
	I0210 12:46:22.637865  588836 system_pods.go:61] "csi-hostpath-attacher-0" [55f71143-7a59-4a9b-8744-1e671c036989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0210 12:46:22.637875  588836 system_pods.go:61] "csi-hostpath-resizer-0" [abdc4146-28ad-422c-99ff-f2360ade4695] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0210 12:46:22.637890  588836 system_pods.go:61] "csi-hostpathplugin-6rxln" [b9bd5d7e-549e-4ec2-9e88-e8d9617b4842] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 12:46:22.637896  588836 system_pods.go:61] "etcd-addons-692802" [58d6a44f-f853-4d91-ba3b-c1249fecc2f5] Running
	I0210 12:46:22.637907  588836 system_pods.go:61] "kube-apiserver-addons-692802" [4f06512f-e9c1-44bc-90a3-3d96bfcdfe85] Running
	I0210 12:46:22.637912  588836 system_pods.go:61] "kube-controller-manager-addons-692802" [e51a6c8e-0d15-4147-9bef-7a5d86624761] Running
	I0210 12:46:22.637921  588836 system_pods.go:61] "kube-ingress-dns-minikube" [6d57ec97-a428-441c-973a-8c44196194ce] Running
	I0210 12:46:22.637926  588836 system_pods.go:61] "kube-proxy-r5fh8" [f4d84ccb-9896-415b-8c48-17f550349ac0] Running
	I0210 12:46:22.637931  588836 system_pods.go:61] "kube-scheduler-addons-692802" [35cd1c1b-f208-49c7-8fb7-8a79e0d117bd] Running
	I0210 12:46:22.637936  588836 system_pods.go:61] "metrics-server-7fbb699795-kjjv7" [66f5e532-302a-4f4a-b0b4-875231c972a3] Running
	I0210 12:46:22.637943  588836 system_pods.go:61] "nvidia-device-plugin-daemonset-8gl6m" [903ecc9f-03ab-4ced-b872-b46377fa27ab] Running
	I0210 12:46:22.637950  588836 system_pods.go:61] "registry-6c88467877-bf2d9" [aa3f3518-d768-442e-8f70-86cabb491756] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0210 12:46:22.637963  588836 system_pods.go:61] "registry-proxy-mkjph" [aa21886d-f389-45d4-ac25-ac8cb798cf7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 12:46:22.637973  588836 system_pods.go:61] "snapshot-controller-68b874b76f-kjvp2" [0f555d79-2125-44da-a9c1-83df4b12a875] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:46:22.637985  588836 system_pods.go:61] "snapshot-controller-68b874b76f-rxr8s" [8bde7b68-9847-4697-82f5-91743733dd41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:46:22.637993  588836 system_pods.go:61] "storage-provisioner" [9b2b36b6-3fb5-49a6-9b1d-e54fe2c69c97] Running
	I0210 12:46:22.638006  588836 system_pods.go:74] duration metric: took 5.318318ms to wait for pod list to return data ...
	I0210 12:46:22.638019  588836 default_sa.go:34] waiting for default service account to be created ...
	I0210 12:46:22.641104  588836 default_sa.go:45] found service account: "default"
	I0210 12:46:22.641126  588836 default_sa.go:55] duration metric: took 3.093998ms for default service account to be created ...
	I0210 12:46:22.641137  588836 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 12:46:22.651389  588836 system_pods.go:86] 18 kube-system pods found
	I0210 12:46:22.651418  588836 system_pods.go:89] "amd-gpu-device-plugin-xm9dq" [a41ec1b7-1187-4b0c-8e9d-a736a1ea5cd5] Running
	I0210 12:46:22.651427  588836 system_pods.go:89] "coredns-668d6bf9bc-sdttz" [d8a6d04a-c10a-4320-b224-4568f3ec83b5] Running
	I0210 12:46:22.651437  588836 system_pods.go:89] "csi-hostpath-attacher-0" [55f71143-7a59-4a9b-8744-1e671c036989] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0210 12:46:22.651446  588836 system_pods.go:89] "csi-hostpath-resizer-0" [abdc4146-28ad-422c-99ff-f2360ade4695] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0210 12:46:22.651457  588836 system_pods.go:89] "csi-hostpathplugin-6rxln" [b9bd5d7e-549e-4ec2-9e88-e8d9617b4842] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 12:46:22.651463  588836 system_pods.go:89] "etcd-addons-692802" [58d6a44f-f853-4d91-ba3b-c1249fecc2f5] Running
	I0210 12:46:22.651470  588836 system_pods.go:89] "kube-apiserver-addons-692802" [4f06512f-e9c1-44bc-90a3-3d96bfcdfe85] Running
	I0210 12:46:22.651476  588836 system_pods.go:89] "kube-controller-manager-addons-692802" [e51a6c8e-0d15-4147-9bef-7a5d86624761] Running
	I0210 12:46:22.651482  588836 system_pods.go:89] "kube-ingress-dns-minikube" [6d57ec97-a428-441c-973a-8c44196194ce] Running
	I0210 12:46:22.651487  588836 system_pods.go:89] "kube-proxy-r5fh8" [f4d84ccb-9896-415b-8c48-17f550349ac0] Running
	I0210 12:46:22.651492  588836 system_pods.go:89] "kube-scheduler-addons-692802" [35cd1c1b-f208-49c7-8fb7-8a79e0d117bd] Running
	I0210 12:46:22.651497  588836 system_pods.go:89] "metrics-server-7fbb699795-kjjv7" [66f5e532-302a-4f4a-b0b4-875231c972a3] Running
	I0210 12:46:22.651502  588836 system_pods.go:89] "nvidia-device-plugin-daemonset-8gl6m" [903ecc9f-03ab-4ced-b872-b46377fa27ab] Running
	I0210 12:46:22.651508  588836 system_pods.go:89] "registry-6c88467877-bf2d9" [aa3f3518-d768-442e-8f70-86cabb491756] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0210 12:46:22.651515  588836 system_pods.go:89] "registry-proxy-mkjph" [aa21886d-f389-45d4-ac25-ac8cb798cf7d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 12:46:22.651523  588836 system_pods.go:89] "snapshot-controller-68b874b76f-kjvp2" [0f555d79-2125-44da-a9c1-83df4b12a875] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:46:22.651531  588836 system_pods.go:89] "snapshot-controller-68b874b76f-rxr8s" [8bde7b68-9847-4697-82f5-91743733dd41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0210 12:46:22.651537  588836 system_pods.go:89] "storage-provisioner" [9b2b36b6-3fb5-49a6-9b1d-e54fe2c69c97] Running
	I0210 12:46:22.651546  588836 system_pods.go:126] duration metric: took 10.40206ms to wait for k8s-apps to be running ...
	I0210 12:46:22.651556  588836 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:46:22.651620  588836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:46:22.667375  588836 system_svc.go:56] duration metric: took 15.807275ms WaitForService to wait for kubelet
	I0210 12:46:22.667405  588836 kubeadm.go:582] duration metric: took 45.190942483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:46:22.667426  588836 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:46:22.671375  588836 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:46:22.671403  588836 node_conditions.go:123] node cpu capacity is 2
	I0210 12:46:22.671421  588836 node_conditions.go:105] duration metric: took 3.98938ms to run NodePressure ...
	I0210 12:46:22.671437  588836 start.go:241] waiting for startup goroutines ...
	I0210 12:46:22.838556  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:22.926327  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:22.927411  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:23.122628  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:23.339011  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:23.426683  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:23.428463  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:23.622260  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:23.840343  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:23.927715  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:23.928749  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:24.121750  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:24.339696  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:24.440574  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:24.440590  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:24.621662  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:24.838847  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:24.926608  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:24.927968  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:25.122788  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:25.340058  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:25.429101  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:25.429337  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:25.622202  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:25.839583  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:25.926350  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:25.928119  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:26.122527  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:26.338964  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:26.427855  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:26.428157  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:26.622003  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:26.838519  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:26.927837  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:26.928422  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:27.122777  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:27.339019  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:27.426940  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:27.428087  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:27.621836  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:27.839652  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:27.926221  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:27.927297  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:28.124453  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:28.338757  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:28.427847  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:28.428077  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:28.625758  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:28.840340  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:28.927023  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:28.927601  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:29.121636  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:29.338467  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:29.427898  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:29.427930  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:29.622327  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:29.839968  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:29.926607  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:29.927765  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:30.123180  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:30.339431  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:30.428068  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:30.428413  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:30.622200  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:30.839636  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:30.927768  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:30.928157  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:31.122425  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:31.340087  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:31.427339  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:31.428457  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:31.622812  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:31.838994  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:31.927056  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:31.927883  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:32.121775  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:32.339782  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:32.426885  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:46:32.427679  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:32.622705  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:32.839647  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:32.926883  588836 kapi.go:107] duration metric: took 46.503445508s to wait for kubernetes.io/minikube-addons=registry ...
	I0210 12:46:32.929116  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:33.122591  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:33.339572  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:33.427979  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:33.621862  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:33.839908  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:33.927623  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:34.122690  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:34.339321  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:34.429119  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:34.621908  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:34.839179  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:34.928189  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:35.367298  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:35.367938  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:35.427611  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:35.621187  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:35.839216  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:35.927923  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:36.122102  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:36.339395  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:36.427927  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:36.621803  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:36.838854  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:36.928250  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:37.122631  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:37.338650  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:37.427743  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:37.621326  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:37.838274  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:37.929002  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:38.121870  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:38.339054  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:38.428168  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:38.622012  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:38.839280  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:38.928651  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:39.121663  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:39.339073  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:39.428311  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:39.622306  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:39.839238  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:39.928613  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:40.121537  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:40.339909  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:40.427872  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:40.621669  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:40.839956  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:40.928102  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:41.121844  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:41.340140  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:41.440826  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:41.621931  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:41.838712  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:41.927493  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:42.122793  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:42.339180  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:42.427859  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:42.622273  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:42.838660  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:42.927663  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:43.121811  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:43.338902  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:43.811711  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:43.812013  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:43.838794  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:43.928079  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:44.122861  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:44.339372  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:44.428044  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:44.621805  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:44.839546  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:44.928018  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:45.121887  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:45.339761  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:45.427616  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:45.622073  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:45.983275  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:45.983282  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:46.122965  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:46.339990  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:46.428309  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:46.622185  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:46.839776  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:46.927743  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:47.121511  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:47.338727  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:47.427421  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:47.621866  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:47.839146  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:47.928242  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:48.123131  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:48.340341  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:48.441598  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:48.622107  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:48.843290  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:48.928406  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:49.124338  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:49.338793  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:49.427958  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:49.622231  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:49.838956  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:50.272063  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:50.272458  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:50.345672  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:50.443822  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:50.622271  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:50.840169  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:50.928256  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:51.122294  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:51.340950  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:51.440986  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:51.621511  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:51.839957  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:51.940225  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:52.122281  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:52.338805  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:52.427598  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:52.621845  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:52.840024  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:52.927750  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:53.121479  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:53.338988  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:53.428136  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:53.624716  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:53.839524  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:53.928231  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:54.122001  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:54.339564  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:54.428747  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:54.621320  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:54.838645  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:54.927434  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:55.122029  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:55.339296  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:55.428118  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:55.623616  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:55.845345  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:55.938192  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:56.122267  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:56.536774  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:56.537966  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:56.621835  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:56.839146  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:56.928522  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:57.123857  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:57.339226  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:57.428072  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:57.621745  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:57.840757  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:57.928488  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:58.129319  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:58.341344  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:58.429733  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:58.622150  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:58.840441  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:58.928072  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:59.121615  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:59.339175  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:59.428145  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:46:59.622144  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:46:59.839966  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:46:59.927976  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:47:00.121777  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:00.338815  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:00.427788  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:47:00.621613  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:00.839329  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:00.940262  588836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:47:01.122346  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:01.344143  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:01.430020  588836 kapi.go:107] duration metric: took 1m15.005617958s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0210 12:47:01.622305  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:01.840133  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:02.204665  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:02.338754  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:02.622604  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:02.839312  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:03.122303  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:03.338802  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:03.621456  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:03.838538  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:04.122299  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:04.339813  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:04.624417  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:05.077569  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:05.178046  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:47:05.341739  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:05.630639  588836 kapi.go:107] duration metric: took 1m15.512042799s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0210 12:47:05.631896  588836 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-692802 cluster.
	I0210 12:47:05.633232  588836 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0210 12:47:05.634520  588836 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0210 12:47:05.841396  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:06.339234  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:06.840172  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:07.338473  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:07.844845  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:08.339730  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:08.839506  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:09.341348  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:09.838857  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:10.339654  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:10.839790  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:11.390510  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:11.839895  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:12.340243  588836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:47:12.839173  588836 kapi.go:107] duration metric: took 1m24.503774942s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0210 12:47:12.840942  588836 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner-rancher, storage-provisioner, amd-gpu-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0210 12:47:12.842051  588836 addons.go:514] duration metric: took 1m35.365546801s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner-rancher storage-provisioner amd-gpu-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0210 12:47:12.842125  588836 start.go:246] waiting for cluster config update ...
	I0210 12:47:12.842151  588836 start.go:255] writing updated cluster config ...
	I0210 12:47:12.842433  588836 ssh_runner.go:195] Run: rm -f paused
	I0210 12:47:12.896151  588836 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 12:47:12.898051  588836 out.go:177] * Done! kubectl is now configured to use "addons-692802" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.321692321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191814321666598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=831ba181-591a-4900-bd90-f8549f6a42e9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.322245732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c34c97a0-b5a4-48d6-8a74-64f6bc12ed29 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.322298283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c34c97a0-b5a4-48d6-8a74-64f6bc12ed29 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.322624032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:508e14e69341f38208b6aff77b8cf45756b79a0feb3030d33628d0afdfe5b4ef,PodSandboxId:927aeda1ce40b3c4a3117b1bdf5049c3c82eacfc6ad29bf8789cda8bb39a1d15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739191675358791005,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b150f03f-0763-48b1-a6dd-5456e6ab3976,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e4a5e3037eaa5d33fc7b8603380965ae31682419ea6357bc238356f506676,PodSandboxId:48a5f9544462c1486e048e7bab562b473b646bc79be8c4d61cee2159310fed65,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739191636837154063,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183f426-b329-49f1-9759-014bcd2a9b34,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b1e6657eec909af39c5c68ce222a34fc2abc6f7109bb1896799125bea2bbe,PodSandboxId:c9c50bb8b467366bb53eefb022835ece3d04481172acebac626244e7d26f4560,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739191620632560816,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-jlz7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38c5c7b3-61f0-4334-b6e3-54bd37be8466,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:07765ac7990d246d8b2881007e635c2357ab0a633ff0fab9f8f7df8c6dcb3566,PodSandboxId:3f0d62c0c84cbdf01eb5990bdce103e39143ced340cefcafff686dd9255ad685,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739191608845524660,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xhq7r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d80e8ea-64bd-4c21-8bce-fd39e3c3741c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3122c82344f5190cddfa8e0935c8ae40d90cfaa57e977fbcfedc14ade19c80b1,PodSandboxId:43a017951837b01cb9fa48f8aa899d45055f6581e529babd46d22d120d96f644,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739191608284964452,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v4dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60e54278-1f41-4554-b1c3-cd9ca5dfab3e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af60883b24c4b2d6873ccab321e2b2aa19af30930d8bf9e11b7789efa6969dd,PodSandboxId:40e9393404c41ec2a051ff3e23019f4211d338e804fa39fd420ff07a7f1f1f3d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739191557429873418,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xm9dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ec1b7-1187-4b0c-8e9d-a736a1ea5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a58dffea01186ef20e73182d15f86b43ffe320487baa67501ef6501d2c3066,PodSandboxId:27aa006e27b905f98c58c057e812f824cbd09ab63602c5655f4fd077a813c472,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739191554523363066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d57ec97-a428-441c-973a-8c44196194ce,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cfbe8f2a95e301a377ca360b4d95d732d5e028b850ec1c4ff5fdd77df19fad,PodSandboxId:df0ddb4ab451ccf8d6f889775d196356b0754804aae69e2f90c3b877e9d101bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739191544320960805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2b36b6-3fb5-49a6-9b1d-e54fe2c69c97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebdeb9696512ef576b3bf3ba968ffcac78eb0bb7b8cf624c5d2dea3e05aac93,PodSandboxId:54f01d8f95c3cc18a6e99f71531b468ba562ab20b1a6cd196164697a4a97a0ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739191540976569097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdttz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a6d04a-c10a-4320-b224-4568f3ec83b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8cc866cfea619a5dfe9dc22cb0fb18789fcf54b67cc815a3bf4
958ad47ffbd,PodSandboxId:a354629f84f2ab04208eaac588da263319256eaef3c54d874f01bc0d8e79601c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739191538567954025,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r5fh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d84ccb-9896-415b-8c48-17f550349ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0858ed2c3fb989debc0d0e2987ea9773c8710d110fc8576be3a26ba97c54d6d,PodSandboxId:9033dbf3
f482b0d83ba2729471830ee1a8ccacb8fb7b559ed1d5369c68e997dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739191527292709058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe5c3bbf64c0d44e86b68ce3e723649,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c235ded61f125a9dca1ad1c07ba75db986b100d562786a247115e41ae28c089,PodSandboxId
:1c38113aac223a1df3b289cbf6eef03772a6c6281a2e14ee0e140ec69d682f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739191527267143875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a34eb6c7537bc9ce0945c871faee69,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce99d2670f91c928b1f11bbce1f8249634d0c240e2ef1e43c2b2eb3d52705d5,PodSandboxId:aace3b149399dd70
a17814628a04c67e5f99e677e73c2d4f3ac6990997e61e04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739191527210679757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b587573fe0fe71e1798a5ffb5ad68e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14b3dd9b4d1012bf02bed8abd7cc5533da46311690b84d321a4cb95f8c699fd,PodSandboxId:dff2045713aa2f6912ba0c42666102a4d796f96bab4af255b58221fd2ec4f2a
8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739191527161351667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09005bcc2991e347e720bffc6fd78694,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c34c97a0-b5a4-48d6-8a74-64f6bc12ed29 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.360240649Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1733b251-c17e-48aa-83f5-73480d8a3af8 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.360320218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1733b251-c17e-48aa-83f5-73480d8a3af8 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.361693219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc1cb12c-8c6e-411b-a869-6602d5bca051 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.363041825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191814363015478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc1cb12c-8c6e-411b-a869-6602d5bca051 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.363752641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc4cf5b1-8d87-4636-bc4c-78a9d9edb87f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.363805006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc4cf5b1-8d87-4636-bc4c-78a9d9edb87f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.364093214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:508e14e69341f38208b6aff77b8cf45756b79a0feb3030d33628d0afdfe5b4ef,PodSandboxId:927aeda1ce40b3c4a3117b1bdf5049c3c82eacfc6ad29bf8789cda8bb39a1d15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739191675358791005,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b150f03f-0763-48b1-a6dd-5456e6ab3976,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e4a5e3037eaa5d33fc7b8603380965ae31682419ea6357bc238356f506676,PodSandboxId:48a5f9544462c1486e048e7bab562b473b646bc79be8c4d61cee2159310fed65,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739191636837154063,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183f426-b329-49f1-9759-014bcd2a9b34,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b1e6657eec909af39c5c68ce222a34fc2abc6f7109bb1896799125bea2bbe,PodSandboxId:c9c50bb8b467366bb53eefb022835ece3d04481172acebac626244e7d26f4560,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739191620632560816,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-jlz7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38c5c7b3-61f0-4334-b6e3-54bd37be8466,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:07765ac7990d246d8b2881007e635c2357ab0a633ff0fab9f8f7df8c6dcb3566,PodSandboxId:3f0d62c0c84cbdf01eb5990bdce103e39143ced340cefcafff686dd9255ad685,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739191608845524660,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xhq7r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d80e8ea-64bd-4c21-8bce-fd39e3c3741c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3122c82344f5190cddfa8e0935c8ae40d90cfaa57e977fbcfedc14ade19c80b1,PodSandboxId:43a017951837b01cb9fa48f8aa899d45055f6581e529babd46d22d120d96f644,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739191608284964452,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v4dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60e54278-1f41-4554-b1c3-cd9ca5dfab3e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af60883b24c4b2d6873ccab321e2b2aa19af30930d8bf9e11b7789efa6969dd,PodSandboxId:40e9393404c41ec2a051ff3e23019f4211d338e804fa39fd420ff07a7f1f1f3d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739191557429873418,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xm9dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ec1b7-1187-4b0c-8e9d-a736a1ea5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a58dffea01186ef20e73182d15f86b43ffe320487baa67501ef6501d2c3066,PodSandboxId:27aa006e27b905f98c58c057e812f824cbd09ab63602c5655f4fd077a813c472,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739191554523363066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d57ec97-a428-441c-973a-8c44196194ce,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cfbe8f2a95e301a377ca360b4d95d732d5e028b850ec1c4ff5fdd77df19fad,PodSandboxId:df0ddb4ab451ccf8d6f889775d196356b0754804aae69e2f90c3b877e9d101bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739191544320960805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2b36b6-3fb5-49a6-9b1d-e54fe2c69c97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebdeb9696512ef576b3bf3ba968ffcac78eb0bb7b8cf624c5d2dea3e05aac93,PodSandboxId:54f01d8f95c3cc18a6e99f71531b468ba562ab20b1a6cd196164697a4a97a0ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739191540976569097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdttz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a6d04a-c10a-4320-b224-4568f3ec83b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8cc866cfea619a5dfe9dc22cb0fb18789fcf54b67cc815a3bf4
958ad47ffbd,PodSandboxId:a354629f84f2ab04208eaac588da263319256eaef3c54d874f01bc0d8e79601c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739191538567954025,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r5fh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d84ccb-9896-415b-8c48-17f550349ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0858ed2c3fb989debc0d0e2987ea9773c8710d110fc8576be3a26ba97c54d6d,PodSandboxId:9033dbf3
f482b0d83ba2729471830ee1a8ccacb8fb7b559ed1d5369c68e997dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739191527292709058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe5c3bbf64c0d44e86b68ce3e723649,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c235ded61f125a9dca1ad1c07ba75db986b100d562786a247115e41ae28c089,PodSandboxId
:1c38113aac223a1df3b289cbf6eef03772a6c6281a2e14ee0e140ec69d682f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739191527267143875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a34eb6c7537bc9ce0945c871faee69,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce99d2670f91c928b1f11bbce1f8249634d0c240e2ef1e43c2b2eb3d52705d5,PodSandboxId:aace3b149399dd70
a17814628a04c67e5f99e677e73c2d4f3ac6990997e61e04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739191527210679757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b587573fe0fe71e1798a5ffb5ad68e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14b3dd9b4d1012bf02bed8abd7cc5533da46311690b84d321a4cb95f8c699fd,PodSandboxId:dff2045713aa2f6912ba0c42666102a4d796f96bab4af255b58221fd2ec4f2a
8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739191527161351667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09005bcc2991e347e720bffc6fd78694,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc4cf5b1-8d87-4636-bc4c-78a9d9edb87f name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.399903005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9057ef55-d321-417e-83ed-57a36e4e8962 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.399987264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9057ef55-d321-417e-83ed-57a36e4e8962 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.401397674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28022151-0596-4e8b-91f3-62bd2f8683d7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.402586146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191814402557366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28022151-0596-4e8b-91f3-62bd2f8683d7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.403226538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07ac3679-dbb5-4b55-ad4d-19ac3cfeba98 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.403283444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07ac3679-dbb5-4b55-ad4d-19ac3cfeba98 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.403580255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:508e14e69341f38208b6aff77b8cf45756b79a0feb3030d33628d0afdfe5b4ef,PodSandboxId:927aeda1ce40b3c4a3117b1bdf5049c3c82eacfc6ad29bf8789cda8bb39a1d15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739191675358791005,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b150f03f-0763-48b1-a6dd-5456e6ab3976,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e4a5e3037eaa5d33fc7b8603380965ae31682419ea6357bc238356f506676,PodSandboxId:48a5f9544462c1486e048e7bab562b473b646bc79be8c4d61cee2159310fed65,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739191636837154063,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183f426-b329-49f1-9759-014bcd2a9b34,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b1e6657eec909af39c5c68ce222a34fc2abc6f7109bb1896799125bea2bbe,PodSandboxId:c9c50bb8b467366bb53eefb022835ece3d04481172acebac626244e7d26f4560,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739191620632560816,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-jlz7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38c5c7b3-61f0-4334-b6e3-54bd37be8466,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:07765ac7990d246d8b2881007e635c2357ab0a633ff0fab9f8f7df8c6dcb3566,PodSandboxId:3f0d62c0c84cbdf01eb5990bdce103e39143ced340cefcafff686dd9255ad685,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739191608845524660,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xhq7r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d80e8ea-64bd-4c21-8bce-fd39e3c3741c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3122c82344f5190cddfa8e0935c8ae40d90cfaa57e977fbcfedc14ade19c80b1,PodSandboxId:43a017951837b01cb9fa48f8aa899d45055f6581e529babd46d22d120d96f644,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739191608284964452,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v4dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60e54278-1f41-4554-b1c3-cd9ca5dfab3e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af60883b24c4b2d6873ccab321e2b2aa19af30930d8bf9e11b7789efa6969dd,PodSandboxId:40e9393404c41ec2a051ff3e23019f4211d338e804fa39fd420ff07a7f1f1f3d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739191557429873418,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xm9dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ec1b7-1187-4b0c-8e9d-a736a1ea5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a58dffea01186ef20e73182d15f86b43ffe320487baa67501ef6501d2c3066,PodSandboxId:27aa006e27b905f98c58c057e812f824cbd09ab63602c5655f4fd077a813c472,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739191554523363066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d57ec97-a428-441c-973a-8c44196194ce,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cfbe8f2a95e301a377ca360b4d95d732d5e028b850ec1c4ff5fdd77df19fad,PodSandboxId:df0ddb4ab451ccf8d6f889775d196356b0754804aae69e2f90c3b877e9d101bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739191544320960805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2b36b6-3fb5-49a6-9b1d-e54fe2c69c97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebdeb9696512ef576b3bf3ba968ffcac78eb0bb7b8cf624c5d2dea3e05aac93,PodSandboxId:54f01d8f95c3cc18a6e99f71531b468ba562ab20b1a6cd196164697a4a97a0ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739191540976569097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdttz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a6d04a-c10a-4320-b224-4568f3ec83b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8cc866cfea619a5dfe9dc22cb0fb18789fcf54b67cc815a3bf4
958ad47ffbd,PodSandboxId:a354629f84f2ab04208eaac588da263319256eaef3c54d874f01bc0d8e79601c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739191538567954025,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r5fh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d84ccb-9896-415b-8c48-17f550349ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0858ed2c3fb989debc0d0e2987ea9773c8710d110fc8576be3a26ba97c54d6d,PodSandboxId:9033dbf3
f482b0d83ba2729471830ee1a8ccacb8fb7b559ed1d5369c68e997dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739191527292709058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe5c3bbf64c0d44e86b68ce3e723649,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c235ded61f125a9dca1ad1c07ba75db986b100d562786a247115e41ae28c089,PodSandboxId
:1c38113aac223a1df3b289cbf6eef03772a6c6281a2e14ee0e140ec69d682f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739191527267143875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a34eb6c7537bc9ce0945c871faee69,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce99d2670f91c928b1f11bbce1f8249634d0c240e2ef1e43c2b2eb3d52705d5,PodSandboxId:aace3b149399dd70
a17814628a04c67e5f99e677e73c2d4f3ac6990997e61e04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739191527210679757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b587573fe0fe71e1798a5ffb5ad68e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14b3dd9b4d1012bf02bed8abd7cc5533da46311690b84d321a4cb95f8c699fd,PodSandboxId:dff2045713aa2f6912ba0c42666102a4d796f96bab4af255b58221fd2ec4f2a
8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739191527161351667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09005bcc2991e347e720bffc6fd78694,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07ac3679-dbb5-4b55-ad4d-19ac3cfeba98 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.447276379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a33a83f8-733e-416e-ae3d-5d755ff857d4 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.447351445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a33a83f8-733e-416e-ae3d-5d755ff857d4 name=/runtime.v1.RuntimeService/Version
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.448579407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ddd8055-10ac-4f4b-ad3d-0d58391c7233 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.449690600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191814449666970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ddd8055-10ac-4f4b-ad3d-0d58391c7233 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.450494273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=289e9f16-5cd1-4903-b0d6-f2474d1bea53 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.450598644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=289e9f16-5cd1-4903-b0d6-f2474d1bea53 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 12:50:14 addons-692802 crio[660]: time="2025-02-10 12:50:14.450931293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:508e14e69341f38208b6aff77b8cf45756b79a0feb3030d33628d0afdfe5b4ef,PodSandboxId:927aeda1ce40b3c4a3117b1bdf5049c3c82eacfc6ad29bf8789cda8bb39a1d15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739191675358791005,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b150f03f-0763-48b1-a6dd-5456e6ab3976,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63e4a5e3037eaa5d33fc7b8603380965ae31682419ea6357bc238356f506676,PodSandboxId:48a5f9544462c1486e048e7bab562b473b646bc79be8c4d61cee2159310fed65,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739191636837154063,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183f426-b329-49f1-9759-014bcd2a9b34,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:592b1e6657eec909af39c5c68ce222a34fc2abc6f7109bb1896799125bea2bbe,PodSandboxId:c9c50bb8b467366bb53eefb022835ece3d04481172acebac626244e7d26f4560,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739191620632560816,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-jlz7l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38c5c7b3-61f0-4334-b6e3-54bd37be8466,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:07765ac7990d246d8b2881007e635c2357ab0a633ff0fab9f8f7df8c6dcb3566,PodSandboxId:3f0d62c0c84cbdf01eb5990bdce103e39143ced340cefcafff686dd9255ad685,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1739191608845524660,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xhq7r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d80e8ea-64bd-4c21-8bce-fd39e3c3741c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3122c82344f5190cddfa8e0935c8ae40d90cfaa57e977fbcfedc14ade19c80b1,PodSandboxId:43a017951837b01cb9fa48f8aa899d45055f6581e529babd46d22d120d96f644,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739191608284964452,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4v4dq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60e54278-1f41-4554-b1c3-cd9ca5dfab3e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af60883b24c4b2d6873ccab321e2b2aa19af30930d8bf9e11b7789efa6969dd,PodSandboxId:40e9393404c41ec2a051ff3e23019f4211d338e804fa39fd420ff07a7f1f1f3d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739191557429873418,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xm9dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ec1b7-1187-4b0c-8e9d-a736a1ea5cd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a58dffea01186ef20e73182d15f86b43ffe320487baa67501ef6501d2c3066,PodSandboxId:27aa006e27b905f98c58c057e812f824cbd09ab63602c5655f4fd077a813c472,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739191554523363066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d57ec97-a428-441c-973a-8c44196194ce,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8cfbe8f2a95e301a377ca360b4d95d732d5e028b850ec1c4ff5fdd77df19fad,PodSandboxId:df0ddb4ab451ccf8d6f889775d196356b0754804aae69e2f90c3b877e9d101bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739191544320960805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b2b36b6-3fb5-49a6-9b1d-e54fe2c69c97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ebdeb9696512ef576b3bf3ba968ffcac78eb0bb7b8cf624c5d2dea3e05aac93,PodSandboxId:54f01d8f95c3cc18a6e99f71531b468ba562ab20b1a6cd196164697a4a97a0ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739191540976569097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sdttz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a6d04a-c10a-4320-b224-4568f3ec83b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8cc866cfea619a5dfe9dc22cb0fb18789fcf54b67cc815a3bf4
958ad47ffbd,PodSandboxId:a354629f84f2ab04208eaac588da263319256eaef3c54d874f01bc0d8e79601c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739191538567954025,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r5fh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d84ccb-9896-415b-8c48-17f550349ac0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0858ed2c3fb989debc0d0e2987ea9773c8710d110fc8576be3a26ba97c54d6d,PodSandboxId:9033dbf3
f482b0d83ba2729471830ee1a8ccacb8fb7b559ed1d5369c68e997dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739191527292709058,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe5c3bbf64c0d44e86b68ce3e723649,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c235ded61f125a9dca1ad1c07ba75db986b100d562786a247115e41ae28c089,PodSandboxId
:1c38113aac223a1df3b289cbf6eef03772a6c6281a2e14ee0e140ec69d682f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739191527267143875,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a34eb6c7537bc9ce0945c871faee69,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fce99d2670f91c928b1f11bbce1f8249634d0c240e2ef1e43c2b2eb3d52705d5,PodSandboxId:aace3b149399dd70
a17814628a04c67e5f99e677e73c2d4f3ac6990997e61e04,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739191527210679757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7b587573fe0fe71e1798a5ffb5ad68e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14b3dd9b4d1012bf02bed8abd7cc5533da46311690b84d321a4cb95f8c699fd,PodSandboxId:dff2045713aa2f6912ba0c42666102a4d796f96bab4af255b58221fd2ec4f2a
8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739191527161351667,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-692802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09005bcc2991e347e720bffc6fd78694,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=289e9f16-5cd1-4903-b0d6-f2474d1bea53 name=/runtime.v1.RuntimeService/ListContainers
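	
	The Version, ImageFsInfo, and ListContainers requests above are the kubelet's routine CRI polling of cri-o rather than errors; none of the listed containers are in a failed state at the time of the dump. For manual triage, the same three queries can be reproduced from the host. A minimal sketch, assuming crictl is available inside the minikube guest for this profile (the commands below are illustrative and are not part of the recorded test run):
	
	  out/minikube-linux-amd64 -p addons-692802 ssh -- sudo crictl version       # RuntimeService/Version
	  out/minikube-linux-amd64 -p addons-692802 ssh -- sudo crictl imagefsinfo   # ImageService/ImageFsInfo
	  out/minikube-linux-amd64 -p addons-692802 ssh -- sudo crictl ps -a         # RuntimeService/ListContainers, no filter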
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	508e14e69341f       docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da                              2 minutes ago       Running             nginx                     0                   927aeda1ce40b       nginx
	a63e4a5e3037e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   48a5f9544462c       busybox
	592b1e6657eec       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   c9c50bb8b4673       ingress-nginx-controller-56d7c84fd4-jlz7l
	07765ac7990d2       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   3f0d62c0c84cb       ingress-nginx-admission-patch-xhq7r
	3122c82344f51       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   43a017951837b       ingress-nginx-admission-create-4v4dq
	4af60883b24c4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   40e9393404c41       amd-gpu-device-plugin-xm9dq
	57a58dffea011       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   27aa006e27b90       kube-ingress-dns-minikube
	d8cfbe8f2a95e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   df0ddb4ab451c       storage-provisioner
	7ebdeb9696512       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   54f01d8f95c3c       coredns-668d6bf9bc-sdttz
	5b8cc866cfea6       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   a354629f84f2a       kube-proxy-r5fh8
	d0858ed2c3fb9       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   9033dbf3f482b       kube-controller-manager-addons-692802
	5c235ded61f12       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   1c38113aac223       kube-scheduler-addons-692802
	fce99d2670f91       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   aace3b149399d       etcd-addons-692802
	f14b3dd9b4d10       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   dff2045713aa2       kube-apiserver-addons-692802
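	
	The container status table ties each cri-o container ID back to its pod; the ingress controller, the nginx test pod, and kube-ingress-dns-minikube all show Running here. To cross-check that view against the API server during triage, a sketch using the same kubectl context and the controller pod name shown above (illustrative commands, not taken from the recorded run):
	
	  kubectl --context addons-692802 get pods -A -o wide
	  kubectl --context addons-692802 -n ingress-nginx get pods -o wide
	  kubectl --context addons-692802 -n ingress-nginx logs ingress-nginx-controller-56d7c84fd4-jlz7l --tail=50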
	
	
	==> coredns [7ebdeb9696512ef576b3bf3ba968ffcac78eb0bb7b8cf624c5d2dea3e05aac93] <==
	[INFO] 10.244.0.8:43515 - 5875 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000143126s
	[INFO] 10.244.0.8:43515 - 43062 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000235008s
	[INFO] 10.244.0.8:43515 - 62149 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000134952s
	[INFO] 10.244.0.8:43515 - 60750 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000078724s
	[INFO] 10.244.0.8:43515 - 57281 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000226918s
	[INFO] 10.244.0.8:43515 - 49861 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000111569s
	[INFO] 10.244.0.8:43515 - 61715 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000110015s
	[INFO] 10.244.0.8:33175 - 26387 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148602s
	[INFO] 10.244.0.8:33175 - 26701 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000172458s
	[INFO] 10.244.0.8:41303 - 23175 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140629s
	[INFO] 10.244.0.8:41303 - 23406 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110028s
	[INFO] 10.244.0.8:58214 - 22005 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000179484s
	[INFO] 10.244.0.8:58214 - 22253 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117666s
	[INFO] 10.244.0.8:59500 - 38696 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159536s
	[INFO] 10.244.0.8:59500 - 39142 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000187039s
	[INFO] 10.244.0.23:49234 - 57409 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000434459s
	[INFO] 10.244.0.23:42998 - 40694 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000382949s
	[INFO] 10.244.0.23:48563 - 50296 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001125984s
	[INFO] 10.244.0.23:52328 - 30159 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000400093s
	[INFO] 10.244.0.23:37458 - 45878 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126183s
	[INFO] 10.244.0.23:53260 - 58019 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000057868s
	[INFO] 10.244.0.23:59696 - 27683 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.007849085s
	[INFO] 10.244.0.23:57872 - 7619 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.008147662s
	[INFO] 10.244.0.27:58314 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000430909s
	[INFO] 10.244.0.27:41047 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000214987s
	
	
	==> describe nodes <==
	Name:               addons-692802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-692802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04
	                    minikube.k8s.io/name=addons-692802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T12_45_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-692802
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:45:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-692802
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:50:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:48:37 +0000   Mon, 10 Feb 2025 12:45:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:48:37 +0000   Mon, 10 Feb 2025 12:45:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:48:37 +0000   Mon, 10 Feb 2025 12:45:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:48:37 +0000   Mon, 10 Feb 2025 12:45:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.213
	  Hostname:    addons-692802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 58fd851bc1084acc9faafd465de81067
	  System UUID:                58fd851b-c108-4acc-9faa-fd465de81067
	  Boot ID:                    60491514-e8d7-4c0b-9653-5e9613a5bec1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     hello-world-app-7d9564db4-tnm2g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-jlz7l    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m28s
	  kube-system                 amd-gpu-device-plugin-xm9dq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-668d6bf9bc-sdttz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m37s
	  kube-system                 etcd-addons-692802                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m42s
	  kube-system                 kube-apiserver-addons-692802                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-addons-692802        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-r5fh8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-692802                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m34s  kube-proxy       
	  Normal  Starting                 4m42s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m42s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m42s  kubelet          Node addons-692802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s  kubelet          Node addons-692802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s  kubelet          Node addons-692802 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m41s  kubelet          Node addons-692802 status is now: NodeReady
	  Normal  RegisteredNode           4m38s  node-controller  Node addons-692802 event: Registered Node addons-692802 in Controller
	
	
	==> dmesg <==
	[  +5.303028] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.147880] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.056769] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.202067] kauditd_printk_skb: 124 callbacks suppressed
	[  +6.559725] kauditd_printk_skb: 76 callbacks suppressed
	[Feb10 12:46] kauditd_printk_skb: 10 callbacks suppressed
	[ +11.474656] kauditd_printk_skb: 6 callbacks suppressed
	[ +19.917207] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.646207] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.227735] kauditd_printk_skb: 52 callbacks suppressed
	[Feb10 12:47] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.011840] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.619625] kauditd_printk_skb: 12 callbacks suppressed
	[ +13.817033] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.087702] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.599348] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.335600] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.095124] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.117293] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.017108] kauditd_printk_skb: 21 callbacks suppressed
	[Feb10 12:48] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.247789] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.727859] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.901165] kauditd_printk_skb: 9 callbacks suppressed
	[ +18.367055] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [fce99d2670f91c928b1f11bbce1f8249634d0c240e2ef1e43c2b2eb3d52705d5] <==
	{"level":"warn","ts":"2025-02-10T12:46:56.511885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.354232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-02-10T12:46:56.511905Z","caller":"traceutil/trace.go:171","msg":"trace[1665061776] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1051; }","duration":"212.387433ms","start":"2025-02-10T12:46:56.299506Z","end":"2025-02-10T12:46:56.511893Z","steps":["trace[1665061776] 'agreement among raft nodes before linearized reading'  (duration: 212.323678ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:47:05.051900Z","caller":"traceutil/trace.go:171","msg":"trace[1974616768] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1126; }","duration":"236.731101ms","start":"2025-02-10T12:47:04.815153Z","end":"2025-02-10T12:47:05.051884Z","steps":["trace[1974616768] 'read index received'  (duration: 236.604112ms)","trace[1974616768] 'applied index is now lower than readState.Index'  (duration: 126.624µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T12:47:05.052272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.215491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:47:05.052304Z","caller":"traceutil/trace.go:171","msg":"trace[1145362969] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1091; }","duration":"142.272121ms","start":"2025-02-10T12:47:04.910022Z","end":"2025-02-10T12:47:05.052295Z","steps":["trace[1145362969] 'agreement among raft nodes before linearized reading'  (duration: 142.132581ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:47:05.052537Z","caller":"traceutil/trace.go:171","msg":"trace[2017136207] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"314.145575ms","start":"2025-02-10T12:47:04.738377Z","end":"2025-02-10T12:47:05.052523Z","steps":["trace[2017136207] 'process raft request'  (duration: 313.419647ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:47:05.052701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:47:04.738361Z","time spent":"314.20144ms","remote":"127.0.0.1:50066","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-692802\" mod_revision:1044 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-692802\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-692802\" > >"}
	{"level":"warn","ts":"2025-02-10T12:47:05.052081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.901372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:47:05.052862Z","caller":"traceutil/trace.go:171","msg":"trace[1635595604] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1091; }","duration":"237.710801ms","start":"2025-02-10T12:47:04.815122Z","end":"2025-02-10T12:47:05.052833Z","steps":["trace[1635595604] 'agreement among raft nodes before linearized reading'  (duration: 236.882598ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:47:37.633808Z","caller":"traceutil/trace.go:171","msg":"trace[1487337918] transaction","detail":"{read_only:false; response_revision:1300; number_of_response:1; }","duration":"167.653352ms","start":"2025-02-10T12:47:37.466141Z","end":"2025-02-10T12:47:37.633795Z","steps":["trace[1487337918] 'process raft request'  (duration: 167.336245ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:47:42.787129Z","caller":"traceutil/trace.go:171","msg":"trace[714111409] linearizableReadLoop","detail":"{readStateIndex:1389; appliedIndex:1388; }","duration":"217.634297ms","start":"2025-02-10T12:47:42.569481Z","end":"2025-02-10T12:47:42.787116Z","steps":["trace[714111409] 'read index received'  (duration: 217.383966ms)","trace[714111409] 'applied index is now lower than readState.Index'  (duration: 249.928µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T12:47:42.787339Z","caller":"traceutil/trace.go:171","msg":"trace[1065233962] transaction","detail":"{read_only:false; response_revision:1342; number_of_response:1; }","duration":"314.45347ms","start":"2025-02-10T12:47:42.472875Z","end":"2025-02-10T12:47:42.787329Z","steps":["trace[1065233962] 'process raft request'  (duration: 314.158148ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:47:42.787464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.909272ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-02-10T12:47:42.787486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:47:42.472863Z","time spent":"314.55554ms","remote":"127.0.0.1:49990","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1315,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1271 >> failure:<>"}
	{"level":"info","ts":"2025-02-10T12:47:42.787499Z","caller":"traceutil/trace.go:171","msg":"trace[1751083649] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1342; }","duration":"218.014213ms","start":"2025-02-10T12:47:42.569476Z","end":"2025-02-10T12:47:42.787491Z","steps":["trace[1751083649] 'agreement among raft nodes before linearized reading'  (duration: 217.858555ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:47:42.787652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.263845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-02-10T12:47:42.787671Z","caller":"traceutil/trace.go:171","msg":"trace[1066020907] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1342; }","duration":"148.304107ms","start":"2025-02-10T12:47:42.639360Z","end":"2025-02-10T12:47:42.787664Z","steps":["trace[1066020907] 'agreement among raft nodes before linearized reading'  (duration: 148.226066ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:47:42.787717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.79503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:47:42.787743Z","caller":"traceutil/trace.go:171","msg":"trace[1920461195] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1342; }","duration":"139.833857ms","start":"2025-02-10T12:47:42.647894Z","end":"2025-02-10T12:47:42.787728Z","steps":["trace[1920461195] 'agreement among raft nodes before linearized reading'  (duration: 139.803259ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:47:44.775487Z","caller":"traceutil/trace.go:171","msg":"trace[1022322244] linearizableReadLoop","detail":"{readStateIndex:1396; appliedIndex:1395; }","duration":"205.464158ms","start":"2025-02-10T12:47:44.570013Z","end":"2025-02-10T12:47:44.775477Z","steps":["trace[1022322244] 'read index received'  (duration: 205.342243ms)","trace[1022322244] 'applied index is now lower than readState.Index'  (duration: 121.524µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T12:47:44.775686Z","caller":"traceutil/trace.go:171","msg":"trace[51072419] transaction","detail":"{read_only:false; response_revision:1349; number_of_response:1; }","duration":"290.103253ms","start":"2025-02-10T12:47:44.485574Z","end":"2025-02-10T12:47:44.775677Z","steps":["trace[51072419] 'process raft request'  (duration: 289.817286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:47:44.775825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.805295ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:47:44.775845Z","caller":"traceutil/trace.go:171","msg":"trace[1790387727] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1349; }","duration":"205.833788ms","start":"2025-02-10T12:47:44.570005Z","end":"2025-02-10T12:47:44.775839Z","steps":["trace[1790387727] 'agreement among raft nodes before linearized reading'  (duration: 205.792964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:47:44.775982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.480951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:47:44.776001Z","caller":"traceutil/trace.go:171","msg":"trace[1953019516] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1349; }","duration":"127.520952ms","start":"2025-02-10T12:47:44.648472Z","end":"2025-02-10T12:47:44.775993Z","steps":["trace[1953019516] 'agreement among raft nodes before linearized reading'  (duration: 127.488801ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:50:14 up 5 min,  0 users,  load average: 0.50, 1.23, 0.64
	Linux addons-692802 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f14b3dd9b4d1012bf02bed8abd7cc5533da46311690b84d321a4cb95f8c699fd] <==
	I0210 12:46:22.581836       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0210 12:47:22.722973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.213:8443->192.168.39.1:49218: use of closed network connection
	E0210 12:47:22.922406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.213:8443->192.168.39.1:49242: use of closed network connection
	I0210 12:47:32.182780       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.219.191"}
	I0210 12:47:44.788083       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0210 12:47:45.846526       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0210 12:47:50.405522       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0210 12:47:50.636243       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.88.6"}
	I0210 12:47:51.725130       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0210 12:48:23.512840       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0210 12:48:24.541002       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:48:24.548016       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:48:24.573558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:48:24.573745       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:48:24.608595       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:48:24.608689       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:48:24.650384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:48:24.650432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:48:24.669828       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:48:24.669884       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0210 12:48:25.650753       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0210 12:48:25.670427       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0210 12:48:25.736274       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0210 12:48:26.817067       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0210 12:50:13.280341       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.169.31"}
	
	
	==> kube-controller-manager [d0858ed2c3fb989debc0d0e2987ea9773c8710d110fc8576be3a26ba97c54d6d] <==
	E0210 12:49:39.539781       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:49:41.889335       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:49:41.890575       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0210 12:49:41.891479       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:49:41.891537       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:49:49.204565       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:49:49.205520       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0210 12:49:49.206530       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:49:49.206581       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:50:03.660392       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:50:03.661552       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0210 12:50:03.662744       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:50:03.662812       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:50:11.187554       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:50:11.189469       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0210 12:50:11.190972       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:50:11.191032       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:50:12.754285       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:50:12.755434       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0210 12:50:12.756275       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:50:12.756334       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0210 12:50:13.109892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.861683ms"
	I0210 12:50:13.126838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="16.379476ms"
	I0210 12:50:13.127111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="171.974µs"
	I0210 12:50:13.133574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="29.892µs"
	
	
	==> kube-proxy [5b8cc866cfea619a5dfe9dc22cb0fb18789fcf54b67cc815a3bf4958ad47ffbd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 12:45:39.778313       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 12:45:39.798498       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.213"]
	E0210 12:45:39.798564       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:45:39.854322       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:45:39.854387       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:45:39.854409       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:45:39.859375       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:45:39.859694       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:45:39.859726       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:45:39.861133       1 config.go:199] "Starting service config controller"
	I0210 12:45:39.861350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:45:39.861398       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:45:39.861420       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:45:39.861939       1 config.go:329] "Starting node config controller"
	I0210 12:45:39.861968       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:45:39.973629       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:45:39.973678       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:45:39.973687       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5c235ded61f125a9dca1ad1c07ba75db986b100d562786a247115e41ae28c089] <==
	W0210 12:45:29.615832       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 12:45:29.616779       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:29.615921       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 12:45:29.616814       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:29.615955       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 12:45:29.616854       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:29.622345       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 12:45:29.622455       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:30.426794       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 12:45:30.426849       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:30.439203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 12:45:30.439254       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:30.449240       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 12:45:30.449303       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 12:45:30.474258       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0210 12:45:30.474307       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:30.524816       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 12:45:30.524869       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:30.694454       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 12:45:30.694504       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:30.708276       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:45:30.708324       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:45:30.743980       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 12:45:30.744398       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:45:32.486978       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 12:49:32 addons-692802 kubelet[1220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:49:32 addons-692802 kubelet[1220]: E0210 12:49:32.652030    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191772651391919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:49:32 addons-692802 kubelet[1220]: E0210 12:49:32.652076    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191772651391919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:49:42 addons-692802 kubelet[1220]: E0210 12:49:42.654671    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191782654299224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:49:42 addons-692802 kubelet[1220]: E0210 12:49:42.654952    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191782654299224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:49:52 addons-692802 kubelet[1220]: E0210 12:49:52.664726    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191792664102486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:49:52 addons-692802 kubelet[1220]: E0210 12:49:52.665410    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191792664102486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:49:56 addons-692802 kubelet[1220]: I0210 12:49:56.476050    1220 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xm9dq" secret="" err="secret \"gcp-auth\" not found"
	Feb 10 12:50:02 addons-692802 kubelet[1220]: E0210 12:50:02.669302    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191802668933153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:50:02 addons-692802 kubelet[1220]: E0210 12:50:02.669586    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191802668933153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:50:12 addons-692802 kubelet[1220]: E0210 12:50:12.672783    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191812672443285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:50:12 addons-692802 kubelet[1220]: E0210 12:50:12.672810    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739191812672443285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102203    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="8bde7b68-9847-4697-82f5-91743733dd41" containerName="volume-snapshot-controller"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102401    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9bd5d7e-549e-4ec2-9e88-e8d9617b4842" containerName="csi-snapshotter"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102503    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="f3f4d475-4a8c-4d74-94e3-2efd26a29e66" containerName="task-pv-container"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102617    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="55f71143-7a59-4a9b-8744-1e671c036989" containerName="csi-attacher"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102649    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="81c92983-579b-46b9-9fc8-3672c116bcab" containerName="local-path-provisioner"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102737    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="0f555d79-2125-44da-a9c1-83df4b12a875" containerName="volume-snapshot-controller"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102776    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9bd5d7e-549e-4ec2-9e88-e8d9617b4842" containerName="node-driver-registrar"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102878    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9bd5d7e-549e-4ec2-9e88-e8d9617b4842" containerName="liveness-probe"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.102910    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9bd5d7e-549e-4ec2-9e88-e8d9617b4842" containerName="csi-external-health-monitor-controller"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.103003    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9bd5d7e-549e-4ec2-9e88-e8d9617b4842" containerName="hostpath"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.103034    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="abdc4146-28ad-422c-99ff-f2360ade4695" containerName="csi-resizer"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.103103    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9bd5d7e-549e-4ec2-9e88-e8d9617b4842" containerName="csi-provisioner"
	Feb 10 12:50:13 addons-692802 kubelet[1220]: I0210 12:50:13.205109    1220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6zdk\" (UniqueName: \"kubernetes.io/projected/3feadeed-02d9-4cd3-a6e1-51414240bf87-kube-api-access-k6zdk\") pod \"hello-world-app-7d9564db4-tnm2g\" (UID: \"3feadeed-02d9-4cd3-a6e1-51414240bf87\") " pod="default/hello-world-app-7d9564db4-tnm2g"
	
	
	==> storage-provisioner [d8cfbe8f2a95e301a377ca360b4d95d732d5e028b850ec1c4ff5fdd77df19fad] <==
	I0210 12:45:44.902940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 12:45:45.028821       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 12:45:45.028890       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0210 12:45:45.345320       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0210 12:45:45.345473       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-692802_40482b3a-9b2d-465b-9c1a-75ce3a4f9b0c!
	I0210 12:45:45.345561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0afd899-e4a3-49fa-9f58-1034e3ba00db", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-692802_40482b3a-9b2d-465b-9c1a-75ce3a4f9b0c became leader
	I0210 12:45:45.648502       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-692802_40482b3a-9b2d-465b-9c1a-75ce3a4f9b0c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-692802 -n addons-692802
helpers_test.go:261: (dbg) Run:  kubectl --context addons-692802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-tnm2g ingress-nginx-admission-create-4v4dq ingress-nginx-admission-patch-xhq7r
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-692802 describe pod hello-world-app-7d9564db4-tnm2g ingress-nginx-admission-create-4v4dq ingress-nginx-admission-patch-xhq7r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-692802 describe pod hello-world-app-7d9564db4-tnm2g ingress-nginx-admission-create-4v4dq ingress-nginx-admission-patch-xhq7r: exit status 1 (67.081925ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-tnm2g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-692802/192.168.39.213
	Start Time:       Mon, 10 Feb 2025 12:50:13 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k6zdk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k6zdk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-tnm2g to addons-692802
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4v4dq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xhq7r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-692802 describe pod hello-world-app-7d9564db4-tnm2g ingress-nginx-admission-create-4v4dq ingress-nginx-admission-patch-xhq7r: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable ingress-dns --alsologtostderr -v=1: (1.767012571s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable ingress --alsologtostderr -v=1: (7.720330881s)
--- FAIL: TestAddons/parallel/Ingress (155.00s)
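
Editorial note: the failing step is the `minikube ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"` probe above, which hung until curl exited with status 28 (operation timed out). As an illustrative sketch only (this is not the harness's code from addons_test.go; it targets the node IP 192.168.39.213 reported above instead of 127.0.0.1 inside the VM, and the 30s timeout is an assumption), the equivalent check in Go is:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// probeIngress sends GET http://<target>/ with the Host header the Ingress
// rule matches on, mirroring the curl the test runs over `minikube ssh`.
func probeIngress(target, host string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://"+target+"/", nil)
	if err != nil {
		return err
	}
	req.Host = host // equivalent of: curl -H 'Host: nginx.example.com'

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // a timeout here corresponds to curl's exit status 28
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	// 192.168.39.213 is the node InternalIP from the describe output above.
	if err := probeIngress("192.168.39.213", "nginx.example.com", 30*time.Second); err != nil {
		fmt.Println("ingress probe failed:", err)
	}
}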

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (10.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh pgrep buildkitd: exit status 1 (227.883138ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image build -t localhost/my-image:functional-729385 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 image build -t localhost/my-image:functional-729385 testdata/build --alsologtostderr: (7.515878476s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-729385 image build -t localhost/my-image:functional-729385 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ef2646b9d15
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-729385
--> 896564337d8
Successfully tagged localhost/my-image:functional-729385
896564337d88f1a9bd02276eb9db569eff7df54a5f583fdcc2f51dd0480a543d
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-729385 image build -t localhost/my-image:functional-729385 testdata/build --alsologtostderr:
I0210 12:56:02.073802  597138 out.go:345] Setting OutFile to fd 1 ...
I0210 12:56:02.073918  597138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:02.073932  597138 out.go:358] Setting ErrFile to fd 2...
I0210 12:56:02.073937  597138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:02.074161  597138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
I0210 12:56:02.074779  597138 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:02.075425  597138 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:02.075835  597138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:02.075905  597138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:02.093145  597138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45609
I0210 12:56:02.093651  597138 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:02.094361  597138 main.go:141] libmachine: Using API Version  1
I0210 12:56:02.094390  597138 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:02.094743  597138 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:02.094960  597138 main.go:141] libmachine: (functional-729385) Calling .GetState
I0210 12:56:02.096949  597138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:02.097002  597138 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:02.112614  597138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
I0210 12:56:02.113070  597138 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:02.113681  597138 main.go:141] libmachine: Using API Version  1
I0210 12:56:02.113719  597138 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:02.114133  597138 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:02.114375  597138 main.go:141] libmachine: (functional-729385) Calling .DriverName
I0210 12:56:02.114593  597138 ssh_runner.go:195] Run: systemctl --version
I0210 12:56:02.114636  597138 main.go:141] libmachine: (functional-729385) Calling .GetSSHHostname
I0210 12:56:02.117895  597138 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:02.118587  597138 main.go:141] libmachine: (functional-729385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:13:08", ip: ""} in network mk-functional-729385: {Iface:virbr1 ExpiryTime:2025-02-10 13:53:05 +0000 UTC Type:0 Mac:52:54:00:ed:13:08 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-729385 Clientid:01:52:54:00:ed:13:08}
I0210 12:56:02.118672  597138 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined IP address 192.168.39.70 and MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:02.118861  597138 main.go:141] libmachine: (functional-729385) Calling .GetSSHPort
I0210 12:56:02.119057  597138 main.go:141] libmachine: (functional-729385) Calling .GetSSHKeyPath
I0210 12:56:02.119251  597138 main.go:141] libmachine: (functional-729385) Calling .GetSSHUsername
I0210 12:56:02.119421  597138 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/functional-729385/id_rsa Username:docker}
I0210 12:56:02.238245  597138 build_images.go:161] Building image from path: /tmp/build.935599290.tar
I0210 12:56:02.238311  597138 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0210 12:56:02.269671  597138 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.935599290.tar
I0210 12:56:02.288053  597138 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.935599290.tar: stat -c "%s %y" /var/lib/minikube/build/build.935599290.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.935599290.tar': No such file or directory
I0210 12:56:02.288098  597138 ssh_runner.go:362] scp /tmp/build.935599290.tar --> /var/lib/minikube/build/build.935599290.tar (3072 bytes)
I0210 12:56:02.356586  597138 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.935599290
I0210 12:56:02.375057  597138 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.935599290 -xf /var/lib/minikube/build/build.935599290.tar
I0210 12:56:02.390507  597138 crio.go:315] Building image: /var/lib/minikube/build/build.935599290
I0210 12:56:02.390624  597138 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-729385 /var/lib/minikube/build/build.935599290 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0210 12:56:09.478527  597138 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-729385 /var/lib/minikube/build/build.935599290 --cgroup-manager=cgroupfs: (7.087868397s)
I0210 12:56:09.478639  597138 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.935599290
I0210 12:56:09.491871  597138 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.935599290.tar
I0210 12:56:09.525765  597138 build_images.go:217] Built localhost/my-image:functional-729385 from /tmp/build.935599290.tar
I0210 12:56:09.525805  597138 build_images.go:133] succeeded building to: functional-729385
I0210 12:56:09.525809  597138 build_images.go:134] failed building to: 
I0210 12:56:09.525878  597138 main.go:141] libmachine: Making call to close driver server
I0210 12:56:09.525896  597138 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:09.526279  597138 main.go:141] libmachine: (functional-729385) DBG | Closing plugin on server side
I0210 12:56:09.526278  597138 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:09.526315  597138 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:09.526326  597138 main.go:141] libmachine: Making call to close driver server
I0210 12:56:09.526338  597138 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:09.526666  597138 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:09.526683  597138 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls
functional_test.go:468: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 image ls: (2.285899143s)
functional_test.go:463: expected "localhost/my-image:functional-729385" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (10.03s)
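Note: the build itself completed on the node (the scp of the build context and the sudo podman build both succeeded in the log above); the failure is only in the later "image ls" check. A minimal way to inspect the node's image store directly, assuming the same profile name and that podman/crictl are available on the node (podman is clearly present, since the build step uses it), would be:

	out/minikube-linux-amd64 -p functional-729385 ssh "sudo podman images | grep my-image"
	out/minikube-linux-amd64 -p functional-729385 ssh "sudo crictl images | grep my-image"

These commands are not part of the test; if the image appears there but not in "image ls", the gap is between the runtime's store and minikube's image listing rather than in the build itself.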

                                                
                                    
TestPreload (178.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-233225 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-233225 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.030739304s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-233225 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-233225 image pull gcr.io/k8s-minikube/busybox: (3.216033535s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-233225
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-233225: (7.30284212s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-233225 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0210 13:40:33.642698  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-233225 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m8.165950778s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-233225 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-02-10 13:41:38.139573421 +0000 UTC m=+3448.515463094
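Note: the image list above contains only the images restored from the v1.24.4 preload tarball; the busybox image pulled before the stop is missing. A minimal manual check after such a restart, assuming the same profile name, would be to compare minikube's listing with the node's CRI-O store:

	out/minikube-linux-amd64 -p test-preload-233225 image list
	out/minikube-linux-amd64 -p test-preload-233225 ssh "sudo crictl images"

These commands are not part of the test run; they only show whether the pulled image survived in the runtime's storage or was replaced along with the preloaded images.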
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-233225 -n test-preload-233225
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-233225 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-233225 logs -n 25: (1.129506094s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-149216 ssh -n                                                                 | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:26 UTC | 10 Feb 25 13:26 UTC |
	|         | multinode-149216-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-149216 ssh -n multinode-149216 sudo cat                                       | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:26 UTC | 10 Feb 25 13:26 UTC |
	|         | /home/docker/cp-test_multinode-149216-m03_multinode-149216.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-149216 cp multinode-149216-m03:/home/docker/cp-test.txt                       | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:26 UTC | 10 Feb 25 13:26 UTC |
	|         | multinode-149216-m02:/home/docker/cp-test_multinode-149216-m03_multinode-149216-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-149216 ssh -n                                                                 | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:26 UTC | 10 Feb 25 13:26 UTC |
	|         | multinode-149216-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-149216 ssh -n multinode-149216-m02 sudo cat                                   | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:26 UTC | 10 Feb 25 13:26 UTC |
	|         | /home/docker/cp-test_multinode-149216-m03_multinode-149216-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-149216 node stop m03                                                          | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:26 UTC | 10 Feb 25 13:26 UTC |
	| node    | multinode-149216 node start                                                             | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:26 UTC | 10 Feb 25 13:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-149216                                                                | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:27 UTC |                     |
	| stop    | -p multinode-149216                                                                     | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:27 UTC | 10 Feb 25 13:30 UTC |
	| start   | -p multinode-149216                                                                     | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:30 UTC | 10 Feb 25 13:32 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-149216                                                                | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:32 UTC |                     |
	| node    | multinode-149216 node delete                                                            | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:32 UTC | 10 Feb 25 13:32 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-149216 stop                                                                   | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:32 UTC | 10 Feb 25 13:35 UTC |
	| start   | -p multinode-149216                                                                     | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:35 UTC | 10 Feb 25 13:37 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-149216                                                                | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:37 UTC |                     |
	| start   | -p multinode-149216-m02                                                                 | multinode-149216-m02 | jenkins | v1.35.0 | 10 Feb 25 13:37 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-149216-m03                                                                 | multinode-149216-m03 | jenkins | v1.35.0 | 10 Feb 25 13:37 UTC | 10 Feb 25 13:38 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-149216                                                                 | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:38 UTC |                     |
	| delete  | -p multinode-149216-m03                                                                 | multinode-149216-m03 | jenkins | v1.35.0 | 10 Feb 25 13:38 UTC | 10 Feb 25 13:38 UTC |
	| delete  | -p multinode-149216                                                                     | multinode-149216     | jenkins | v1.35.0 | 10 Feb 25 13:38 UTC | 10 Feb 25 13:38 UTC |
	| start   | -p test-preload-233225                                                                  | test-preload-233225  | jenkins | v1.35.0 | 10 Feb 25 13:38 UTC | 10 Feb 25 13:40 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-233225 image pull                                                          | test-preload-233225  | jenkins | v1.35.0 | 10 Feb 25 13:40 UTC | 10 Feb 25 13:40 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-233225                                                                  | test-preload-233225  | jenkins | v1.35.0 | 10 Feb 25 13:40 UTC | 10 Feb 25 13:40 UTC |
	| start   | -p test-preload-233225                                                                  | test-preload-233225  | jenkins | v1.35.0 | 10 Feb 25 13:40 UTC | 10 Feb 25 13:41 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-233225 image list                                                          | test-preload-233225  | jenkins | v1.35.0 | 10 Feb 25 13:41 UTC | 10 Feb 25 13:41 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 13:40:29
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 13:40:29.799308  619525 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:40:29.799406  619525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:40:29.799413  619525 out.go:358] Setting ErrFile to fd 2...
	I0210 13:40:29.799418  619525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:40:29.799627  619525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:40:29.800145  619525 out.go:352] Setting JSON to false
	I0210 13:40:29.801084  619525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12175,"bootTime":1739182655,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:40:29.801192  619525 start.go:139] virtualization: kvm guest
	I0210 13:40:29.803228  619525 out.go:177] * [test-preload-233225] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:40:29.804448  619525 notify.go:220] Checking for updates...
	I0210 13:40:29.804453  619525 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 13:40:29.805767  619525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:40:29.806927  619525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:40:29.808057  619525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:40:29.809161  619525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:40:29.810262  619525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:40:29.811677  619525 config.go:182] Loaded profile config "test-preload-233225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0210 13:40:29.812091  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:40:29.812154  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:40:29.827188  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40455
	I0210 13:40:29.827659  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:40:29.828265  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:40:29.828310  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:40:29.828716  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:40:29.828960  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:40:29.830483  619525 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 13:40:29.831679  619525 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:40:29.832033  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:40:29.832083  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:40:29.846580  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0210 13:40:29.846945  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:40:29.847414  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:40:29.847436  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:40:29.847767  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:40:29.847986  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:40:29.881373  619525 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:40:29.882527  619525 start.go:297] selected driver: kvm2
	I0210 13:40:29.882541  619525 start.go:901] validating driver "kvm2" against &{Name:test-preload-233225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-233225
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:40:29.882640  619525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:40:29.883370  619525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:40:29.883446  619525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:40:29.898013  619525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:40:29.898394  619525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:40:29.898426  619525 cni.go:84] Creating CNI manager for ""
	I0210 13:40:29.898472  619525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:40:29.898521  619525 start.go:340] cluster config:
	{Name:test-preload-233225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-233225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:40:29.898635  619525 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:40:29.900165  619525 out.go:177] * Starting "test-preload-233225" primary control-plane node in "test-preload-233225" cluster
	I0210 13:40:29.901268  619525 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 13:40:30.008644  619525 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0210 13:40:30.008682  619525 cache.go:56] Caching tarball of preloaded images
	I0210 13:40:30.008839  619525 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 13:40:30.010505  619525 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0210 13:40:30.011655  619525 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 13:40:30.117830  619525 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0210 13:40:42.084961  619525 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 13:40:42.085063  619525 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0210 13:40:42.950455  619525 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0210 13:40:42.950613  619525 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/config.json ...
	I0210 13:40:42.950891  619525 start.go:360] acquireMachinesLock for test-preload-233225: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:40:42.950978  619525 start.go:364] duration metric: took 55.947µs to acquireMachinesLock for "test-preload-233225"
	I0210 13:40:42.951002  619525 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:40:42.951010  619525 fix.go:54] fixHost starting: 
	I0210 13:40:42.951304  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:40:42.951354  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:40:42.966548  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0210 13:40:42.967105  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:40:42.967620  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:40:42.967643  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:40:42.967936  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:40:42.968122  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:40:42.968267  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetState
	I0210 13:40:42.969883  619525 fix.go:112] recreateIfNeeded on test-preload-233225: state=Stopped err=<nil>
	I0210 13:40:42.969922  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	W0210 13:40:42.970074  619525 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:40:42.972081  619525 out.go:177] * Restarting existing kvm2 VM for "test-preload-233225" ...
	I0210 13:40:42.973485  619525 main.go:141] libmachine: (test-preload-233225) Calling .Start
	I0210 13:40:42.973656  619525 main.go:141] libmachine: (test-preload-233225) starting domain...
	I0210 13:40:42.973671  619525 main.go:141] libmachine: (test-preload-233225) ensuring networks are active...
	I0210 13:40:42.974573  619525 main.go:141] libmachine: (test-preload-233225) Ensuring network default is active
	I0210 13:40:42.975030  619525 main.go:141] libmachine: (test-preload-233225) Ensuring network mk-test-preload-233225 is active
	I0210 13:40:42.975432  619525 main.go:141] libmachine: (test-preload-233225) getting domain XML...
	I0210 13:40:42.976263  619525 main.go:141] libmachine: (test-preload-233225) creating domain...
	I0210 13:40:44.165172  619525 main.go:141] libmachine: (test-preload-233225) waiting for IP...
	I0210 13:40:44.166005  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:44.166384  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:44.166465  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:44.166381  619593 retry.go:31] will retry after 263.078984ms: waiting for domain to come up
	I0210 13:40:44.430911  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:44.431398  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:44.431427  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:44.431359  619593 retry.go:31] will retry after 341.987135ms: waiting for domain to come up
	I0210 13:40:44.775065  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:44.775504  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:44.775535  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:44.775459  619593 retry.go:31] will retry after 433.014805ms: waiting for domain to come up
	I0210 13:40:45.209986  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:45.210392  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:45.210423  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:45.210349  619593 retry.go:31] will retry after 370.52302ms: waiting for domain to come up
	I0210 13:40:45.582723  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:45.583093  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:45.583117  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:45.583043  619593 retry.go:31] will retry after 684.752193ms: waiting for domain to come up
	I0210 13:40:46.268854  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:46.269172  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:46.269196  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:46.269141  619593 retry.go:31] will retry after 790.854338ms: waiting for domain to come up
	I0210 13:40:47.062090  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:47.062504  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:47.062534  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:47.062459  619593 retry.go:31] will retry after 807.234854ms: waiting for domain to come up
	I0210 13:40:47.870850  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:47.871272  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:47.871302  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:47.871215  619593 retry.go:31] will retry after 911.746132ms: waiting for domain to come up
	I0210 13:40:48.784198  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:48.784598  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:48.784687  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:48.784601  619593 retry.go:31] will retry after 1.411787871s: waiting for domain to come up
	I0210 13:40:50.198676  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:50.199087  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:50.199113  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:50.199060  619593 retry.go:31] will retry after 1.542194788s: waiting for domain to come up
	I0210 13:40:51.743841  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:51.744213  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:51.744268  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:51.744215  619593 retry.go:31] will retry after 2.343005953s: waiting for domain to come up
	I0210 13:40:54.089007  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:54.089462  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:54.089489  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:54.089415  619593 retry.go:31] will retry after 2.713254113s: waiting for domain to come up
	I0210 13:40:56.806212  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:40:56.806631  619525 main.go:141] libmachine: (test-preload-233225) DBG | unable to find current IP address of domain test-preload-233225 in network mk-test-preload-233225
	I0210 13:40:56.806655  619525 main.go:141] libmachine: (test-preload-233225) DBG | I0210 13:40:56.806581  619593 retry.go:31] will retry after 3.969062162s: waiting for domain to come up
	I0210 13:41:00.778969  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.779427  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has current primary IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.779449  619525 main.go:141] libmachine: (test-preload-233225) found domain IP: 192.168.39.141
	I0210 13:41:00.779459  619525 main.go:141] libmachine: (test-preload-233225) reserving static IP address...
	I0210 13:41:00.779893  619525 main.go:141] libmachine: (test-preload-233225) reserved static IP address 192.168.39.141 for domain test-preload-233225
	I0210 13:41:00.779939  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "test-preload-233225", mac: "52:54:00:6f:14:7e", ip: "192.168.39.141"} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:00.779951  619525 main.go:141] libmachine: (test-preload-233225) waiting for SSH...
	I0210 13:41:00.779983  619525 main.go:141] libmachine: (test-preload-233225) DBG | skip adding static IP to network mk-test-preload-233225 - found existing host DHCP lease matching {name: "test-preload-233225", mac: "52:54:00:6f:14:7e", ip: "192.168.39.141"}
	I0210 13:41:00.780003  619525 main.go:141] libmachine: (test-preload-233225) DBG | Getting to WaitForSSH function...
	I0210 13:41:00.782192  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.782557  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:00.782586  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.782701  619525 main.go:141] libmachine: (test-preload-233225) DBG | Using SSH client type: external
	I0210 13:41:00.782725  619525 main.go:141] libmachine: (test-preload-233225) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa (-rw-------)
	I0210 13:41:00.782762  619525 main.go:141] libmachine: (test-preload-233225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:41:00.782781  619525 main.go:141] libmachine: (test-preload-233225) DBG | About to run SSH command:
	I0210 13:41:00.782794  619525 main.go:141] libmachine: (test-preload-233225) DBG | exit 0
	I0210 13:41:00.908699  619525 main.go:141] libmachine: (test-preload-233225) DBG | SSH cmd err, output: <nil>: 
	I0210 13:41:00.909041  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetConfigRaw
	I0210 13:41:00.909681  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetIP
	I0210 13:41:00.912346  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.912689  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:00.912713  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.912927  619525 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/config.json ...
	I0210 13:41:00.913111  619525 machine.go:93] provisionDockerMachine start ...
	I0210 13:41:00.913130  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:00.913328  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:00.915572  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.915912  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:00.915943  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:00.916075  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:00.916272  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:00.916476  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:00.916631  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:00.916806  619525 main.go:141] libmachine: Using SSH client type: native
	I0210 13:41:00.917012  619525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0210 13:41:00.917025  619525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:41:01.020754  619525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 13:41:01.020789  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetMachineName
	I0210 13:41:01.021050  619525 buildroot.go:166] provisioning hostname "test-preload-233225"
	I0210 13:41:01.021087  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetMachineName
	I0210 13:41:01.021330  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:01.023810  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.024186  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.024223  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.024349  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:01.024544  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.024708  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.024824  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:01.024977  619525 main.go:141] libmachine: Using SSH client type: native
	I0210 13:41:01.025218  619525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0210 13:41:01.025235  619525 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-233225 && echo "test-preload-233225" | sudo tee /etc/hostname
	I0210 13:41:01.143812  619525 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-233225
	
	I0210 13:41:01.143845  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:01.146656  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.146944  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.146975  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.147179  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:01.147391  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.147556  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.147679  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:01.147834  619525 main.go:141] libmachine: Using SSH client type: native
	I0210 13:41:01.148050  619525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0210 13:41:01.148076  619525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-233225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-233225/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-233225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:41:01.261589  619525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:41:01.261624  619525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 13:41:01.261645  619525 buildroot.go:174] setting up certificates
	I0210 13:41:01.261659  619525 provision.go:84] configureAuth start
	I0210 13:41:01.261669  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetMachineName
	I0210 13:41:01.261984  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetIP
	I0210 13:41:01.264550  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.264883  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.264931  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.265095  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:01.267381  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.267729  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.267759  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.267937  619525 provision.go:143] copyHostCerts
	I0210 13:41:01.268005  619525 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 13:41:01.268019  619525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 13:41:01.268086  619525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 13:41:01.268178  619525 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 13:41:01.268186  619525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 13:41:01.268210  619525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 13:41:01.268344  619525 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 13:41:01.268356  619525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 13:41:01.268386  619525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 13:41:01.268443  619525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.test-preload-233225 san=[127.0.0.1 192.168.39.141 localhost minikube test-preload-233225]
	I0210 13:41:01.372446  619525 provision.go:177] copyRemoteCerts
	I0210 13:41:01.372527  619525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:41:01.372554  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:01.375405  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.375784  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.375817  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.376014  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:01.376214  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.376401  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:01.376533  619525 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa Username:docker}
	I0210 13:41:01.458736  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0210 13:41:01.484100  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 13:41:01.513794  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 13:41:01.539802  619525 provision.go:87] duration metric: took 278.128407ms to configureAuth
	I0210 13:41:01.539834  619525 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:41:01.540013  619525 config.go:182] Loaded profile config "test-preload-233225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0210 13:41:01.540104  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:01.542780  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.543076  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.543136  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.543287  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:01.543488  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.543651  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.543765  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:01.543936  619525 main.go:141] libmachine: Using SSH client type: native
	I0210 13:41:01.544106  619525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0210 13:41:01.544121  619525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:41:01.767831  619525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:41:01.767868  619525 machine.go:96] duration metric: took 854.745593ms to provisionDockerMachine
	I0210 13:41:01.767884  619525 start.go:293] postStartSetup for "test-preload-233225" (driver="kvm2")
	I0210 13:41:01.767900  619525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:41:01.767926  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:01.768306  619525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:41:01.768356  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:01.771190  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.771571  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.771611  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.771752  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:01.771944  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.772130  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:01.772301  619525 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa Username:docker}
	I0210 13:41:01.855421  619525 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:41:01.860005  619525 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:41:01.860039  619525 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 13:41:01.860127  619525 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 13:41:01.860200  619525 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 13:41:01.860321  619525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:41:01.870001  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:41:01.893952  619525 start.go:296] duration metric: took 126.051667ms for postStartSetup
	I0210 13:41:01.893994  619525 fix.go:56] duration metric: took 18.942985961s for fixHost
	I0210 13:41:01.894017  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:01.896342  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.896604  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:01.896628  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:01.896837  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:01.897052  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.897179  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:01.897280  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:01.897393  619525 main.go:141] libmachine: Using SSH client type: native
	I0210 13:41:01.897548  619525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0210 13:41:01.897558  619525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:41:02.001294  619525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739194861.974037904
	
	I0210 13:41:02.001322  619525 fix.go:216] guest clock: 1739194861.974037904
	I0210 13:41:02.001330  619525 fix.go:229] Guest: 2025-02-10 13:41:01.974037904 +0000 UTC Remote: 2025-02-10 13:41:01.893998277 +0000 UTC m=+32.132611883 (delta=80.039627ms)
	I0210 13:41:02.001351  619525 fix.go:200] guest clock delta is within tolerance: 80.039627ms
	I0210 13:41:02.001356  619525 start.go:83] releasing machines lock for "test-preload-233225", held for 19.050364493s
	I0210 13:41:02.001380  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:02.001657  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetIP
	I0210 13:41:02.004177  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:02.004505  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:02.004546  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:02.004768  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:02.005273  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:02.005469  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:02.005571  619525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:41:02.005603  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:02.005730  619525 ssh_runner.go:195] Run: cat /version.json
	I0210 13:41:02.005756  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:02.008064  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:02.008423  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:02.008455  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:02.008477  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:02.008622  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:02.008800  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:02.008920  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:02.008946  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:02.008979  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:02.009082  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:02.009153  619525 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa Username:docker}
	I0210 13:41:02.009220  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:02.009335  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:02.009466  619525 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa Username:docker}
	I0210 13:41:02.085475  619525 ssh_runner.go:195] Run: systemctl --version
	I0210 13:41:02.112832  619525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:41:02.260687  619525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:41:02.267393  619525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:41:02.267482  619525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:41:02.285328  619525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:41:02.285367  619525 start.go:495] detecting cgroup driver to use...
	I0210 13:41:02.285449  619525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:41:02.302876  619525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:41:02.317136  619525 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:41:02.317221  619525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:41:02.332715  619525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:41:02.348140  619525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:41:02.459550  619525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:41:02.615875  619525 docker.go:233] disabling docker service ...
	I0210 13:41:02.615967  619525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:41:02.630369  619525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:41:02.644015  619525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:41:02.759383  619525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:41:02.873819  619525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:41:02.887896  619525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:41:02.906556  619525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0210 13:41:02.906628  619525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:41:02.916862  619525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:41:02.916922  619525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:41:02.926956  619525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:41:02.936748  619525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:41:02.946834  619525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:41:02.956911  619525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:41:02.966825  619525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:41:02.983817  619525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:41:02.993992  619525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:41:03.003333  619525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:41:03.003393  619525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:41:03.016566  619525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:41:03.025782  619525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:41:03.138606  619525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:41:03.235760  619525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:41:03.235829  619525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:41:03.241415  619525 start.go:563] Will wait 60s for crictl version
	I0210 13:41:03.241486  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:03.245332  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:41:03.289315  619525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:41:03.289402  619525 ssh_runner.go:195] Run: crio --version
	I0210 13:41:03.318022  619525 ssh_runner.go:195] Run: crio --version
	I0210 13:41:03.349712  619525 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0210 13:41:03.351013  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetIP
	I0210 13:41:03.353868  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:03.354207  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:03.354245  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:03.354483  619525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 13:41:03.358921  619525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:41:03.372339  619525 kubeadm.go:883] updating cluster {Name:test-preload-233225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-233225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:41:03.372466  619525 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0210 13:41:03.372513  619525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:41:03.408608  619525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0210 13:41:03.408689  619525 ssh_runner.go:195] Run: which lz4
	I0210 13:41:03.412972  619525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:41:03.417203  619525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:41:03.417239  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0210 13:41:04.960366  619525 crio.go:462] duration metric: took 1.547442245s to copy over tarball
	I0210 13:41:04.960454  619525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:41:07.305985  619525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345492574s)
	I0210 13:41:07.306017  619525 crio.go:469] duration metric: took 2.345617261s to extract the tarball
	I0210 13:41:07.306024  619525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 13:41:07.347763  619525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:41:07.391745  619525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0210 13:41:07.391774  619525 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 13:41:07.391838  619525 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:41:07.391923  619525 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:41:07.391953  619525 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:41:07.391958  619525 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:41:07.391977  619525 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0210 13:41:07.391870  619525 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0210 13:41:07.391868  619525 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:41:07.391944  619525 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:41:07.393325  619525 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0210 13:41:07.393325  619525 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:41:07.393427  619525 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:41:07.393326  619525 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:41:07.393429  619525 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:41:07.393455  619525 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:41:07.393482  619525 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:41:07.393455  619525 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0210 13:41:07.569795  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:41:07.573709  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:41:07.582610  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:41:07.583826  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0210 13:41:07.589030  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:41:07.617225  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0210 13:41:07.640723  619525 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0210 13:41:07.640785  619525 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:41:07.640840  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:07.663462  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:41:07.675620  619525 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0210 13:41:07.675673  619525 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:41:07.675743  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:07.725410  619525 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0210 13:41:07.725461  619525 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:41:07.725511  619525 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0210 13:41:07.725557  619525 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0210 13:41:07.725596  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:07.725516  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:07.741579  619525 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0210 13:41:07.741626  619525 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:41:07.741668  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:07.741701  619525 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0210 13:41:07.741744  619525 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0210 13:41:07.741768  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:41:07.741783  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:07.765139  619525 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0210 13:41:07.765181  619525 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:41:07.765196  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:41:07.765241  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:41:07.765257  619525 ssh_runner.go:195] Run: which crictl
	I0210 13:41:07.765286  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 13:41:07.765319  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 13:41:07.812903  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:41:07.812996  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:41:07.911458  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:41:07.911593  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 13:41:07.911594  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:41:07.911685  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 13:41:07.911697  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:41:07.937300  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0210 13:41:07.937333  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:41:08.058442  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0210 13:41:08.060093  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0210 13:41:08.060142  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0210 13:41:08.060223  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:41:08.060401  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0210 13:41:08.115948  619525 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0210 13:41:08.116036  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0210 13:41:08.116079  619525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0210 13:41:08.215767  619525 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0210 13:41:08.215893  619525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 13:41:08.217777  619525 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0210 13:41:08.217855  619525 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0210 13:41:08.217879  619525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0210 13:41:08.217905  619525 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0210 13:41:08.217879  619525 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0210 13:41:08.217966  619525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0210 13:41:08.217983  619525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 13:41:08.218001  619525 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0210 13:41:08.218020  619525 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0210 13:41:08.218033  619525 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0210 13:41:08.218066  619525 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0210 13:41:08.218077  619525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 13:41:08.221524  619525 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0210 13:41:08.269178  619525 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0210 13:41:08.269244  619525 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0210 13:41:08.269287  619525 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0210 13:41:08.269396  619525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 13:41:08.491941  619525 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:41:10.797051  619525 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.579146251s)
	I0210 13:41:10.797077  619525 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.578979979s)
	I0210 13:41:10.797097  619525 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0210 13:41:10.797103  619525 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.579105522s)
	I0210 13:41:10.797127  619525 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0210 13:41:10.797106  619525 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0210 13:41:10.797157  619525 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 13:41:10.797163  619525 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.527746311s)
	I0210 13:41:10.797187  619525 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0210 13:41:10.797210  619525 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0210 13:41:10.797255  619525 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.305281677s)
	I0210 13:41:11.541626  619525 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0210 13:41:11.541667  619525 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 13:41:11.541716  619525 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0210 13:41:12.385343  619525 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0210 13:41:12.385400  619525 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0210 13:41:12.385458  619525 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0210 13:41:14.633805  619525 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.248318176s)
	I0210 13:41:14.633835  619525 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0210 13:41:14.633878  619525 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0210 13:41:14.633934  619525 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0210 13:41:14.772080  619525 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0210 13:41:14.772145  619525 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 13:41:14.772229  619525 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0210 13:41:15.217295  619525 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0210 13:41:15.217347  619525 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 13:41:15.217405  619525 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0210 13:41:15.967020  619525 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0210 13:41:15.967082  619525 cache_images.go:123] Successfully loaded all cached images
	I0210 13:41:15.967100  619525 cache_images.go:92] duration metric: took 8.575303495s to LoadCachedImages
	I0210 13:41:15.967116  619525 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.24.4 crio true true} ...
	I0210 13:41:15.967231  619525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-233225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-233225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:41:15.967307  619525 ssh_runner.go:195] Run: crio config
	I0210 13:41:16.022098  619525 cni.go:84] Creating CNI manager for ""
	I0210 13:41:16.022120  619525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:41:16.022130  619525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:41:16.022150  619525 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-233225 NodeName:test-preload-233225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:41:16.022293  619525 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-233225"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:41:16.022358  619525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0210 13:41:16.033314  619525 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:41:16.033379  619525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:41:16.043316  619525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0210 13:41:16.060358  619525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:41:16.077109  619525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0210 13:41:16.094236  619525 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I0210 13:41:16.098105  619525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:41:16.110497  619525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:41:16.241049  619525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:41:16.258880  619525 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225 for IP: 192.168.39.141
	I0210 13:41:16.258904  619525 certs.go:194] generating shared ca certs ...
	I0210 13:41:16.258923  619525 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:41:16.259088  619525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 13:41:16.259131  619525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 13:41:16.259142  619525 certs.go:256] generating profile certs ...
	I0210 13:41:16.259240  619525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/client.key
	I0210 13:41:16.259304  619525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/apiserver.key.bc0d9b87
	I0210 13:41:16.259344  619525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/proxy-client.key
	I0210 13:41:16.259452  619525 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 13:41:16.259484  619525 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 13:41:16.259494  619525 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 13:41:16.259518  619525 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 13:41:16.259540  619525 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:41:16.259560  619525 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 13:41:16.259596  619525 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:41:16.260341  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:41:16.294976  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:41:16.320196  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:41:16.352047  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:41:16.379553  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0210 13:41:16.403291  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 13:41:16.429301  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:41:16.453527  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:41:16.489762  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 13:41:16.513047  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 13:41:16.536292  619525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:41:16.559218  619525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:41:16.576792  619525 ssh_runner.go:195] Run: openssl version
	I0210 13:41:16.582887  619525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 13:41:16.593980  619525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 13:41:16.598711  619525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 13:41:16.598779  619525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 13:41:16.604814  619525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 13:41:16.616180  619525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 13:41:16.627618  619525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 13:41:16.632497  619525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 13:41:16.632553  619525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 13:41:16.638513  619525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:41:16.649781  619525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:41:16.661346  619525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:41:16.666210  619525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:41:16.666271  619525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:41:16.672139  619525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:41:16.683638  619525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:41:16.688541  619525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:41:16.694841  619525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:41:16.700903  619525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:41:16.706974  619525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:41:16.713083  619525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:41:16.718966  619525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
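Each existing control-plane certificate is then probed with openssl's -checkend 86400, which succeeds only if the certificate remains valid for at least the next 24 hours. A hedged stand-alone equivalent for one of the files checked above:

	# exit status 0: still valid for >= 86400s; 1: expires (or has expired) within 24h
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate is valid for at least another day"
	else
	  echo "certificate expires within 24h"
	fi
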
	I0210 13:41:16.725051  619525 kubeadm.go:392] StartCluster: {Name:test-preload-233225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-233225 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:41:16.725135  619525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:41:16.725202  619525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:41:16.769516  619525 cri.go:89] found id: ""
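The empty "found id" result comes from asking the CRI runtime for every kube-system container; the same query can be reproduced by hand (CRI-O's default socket path assumed):

	# list running and exited kube-system containers known to CRI-O
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
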
	I0210 13:41:16.769605  619525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:41:16.780115  619525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 13:41:16.780138  619525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 13:41:16.780190  619525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 13:41:16.789961  619525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:41:16.790421  619525 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-233225" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:41:16.790541  619525 kubeconfig.go:62] /home/jenkins/minikube-integration/20390-580861/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-233225" cluster setting kubeconfig missing "test-preload-233225" context setting]
	I0210 13:41:16.790795  619525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:41:16.791342  619525 kapi.go:59] client config for test-preload-233225: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/client.crt", KeyFile:"/home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/client.key", CAFile:"/home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 13:41:16.791729  619525 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 13:41:16.791744  619525 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 13:41:16.791749  619525 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 13:41:16.791753  619525 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 13:41:16.792119  619525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 13:41:16.801608  619525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.141
	I0210 13:41:16.801638  619525 kubeadm.go:1160] stopping kube-system containers ...
	I0210 13:41:16.801652  619525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 13:41:16.801699  619525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:41:16.850453  619525 cri.go:89] found id: ""
	I0210 13:41:16.850573  619525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 13:41:16.868216  619525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:41:16.878377  619525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:41:16.878398  619525 kubeadm.go:157] found existing configuration files:
	
	I0210 13:41:16.878436  619525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:41:16.887530  619525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:41:16.887582  619525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:41:16.897145  619525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:41:16.906082  619525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:41:16.906130  619525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:41:16.915473  619525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:41:16.924629  619525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:41:16.924681  619525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:41:16.933993  619525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:41:16.943032  619525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:41:16.943097  619525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:41:16.952453  619525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:41:16.961998  619525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:41:17.054352  619525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:41:17.848527  619525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:41:18.114986  619525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:41:18.190955  619525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
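Rather than a full `kubeadm init`, the restart path replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml. Roughly, the five commands above amount to the following sequence (versioned binary path as logged; comments are editorial):

	export PATH="/var/lib/minikube/binaries/v1.24.4:$PATH"
	kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml  # (re)issue any missing certificates
	kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml  # rewrite admin/kubelet/controller-manager/scheduler kubeconfigs
	kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml  # write kubelet config and restart the kubelet
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml  # static-pod manifests for apiserver, controller-manager, scheduler
	kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml  # static-pod manifest for the local etcd member
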
	I0210 13:41:18.337159  619525 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:41:18.337247  619525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:41:18.837328  619525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:41:19.337563  619525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:41:19.353320  619525 api_server.go:72] duration metric: took 1.01616119s to wait for apiserver process to appear ...
	I0210 13:41:19.353347  619525 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:41:19.353368  619525 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0210 13:41:19.353967  619525 api_server.go:269] stopped: https://192.168.39.141:8443/healthz: Get "https://192.168.39.141:8443/healthz": dial tcp 192.168.39.141:8443: connect: connection refused
	I0210 13:41:19.854406  619525 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0210 13:41:23.357582  619525 api_server.go:279] https://192.168.39.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:41:23.357613  619525 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:41:23.357629  619525 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0210 13:41:23.401970  619525 api_server.go:279] https://192.168.39.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 13:41:23.402003  619525 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 13:41:23.853620  619525 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0210 13:41:23.859633  619525 api_server.go:279] https://192.168.39.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:41:23.859692  619525 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:41:24.354488  619525 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0210 13:41:24.362815  619525 api_server.go:279] https://192.168.39.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 13:41:24.362850  619525 api_server.go:103] status: https://192.168.39.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 13:41:24.854270  619525 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0210 13:41:24.861453  619525 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I0210 13:41:24.873826  619525 api_server.go:141] control plane version: v1.24.4
	I0210 13:41:24.873883  619525 api_server.go:131] duration metric: took 5.520526757s to wait for apiserver health ...
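The progression above (connection refused, then 403, then 500, then 200) is the normal restart sequence: the 403s are returned to the unauthenticated probe, most likely because the bootstrap RBAC role that allows anonymous /healthz access has not been recreated yet, and the 500s list the individual post-start hooks still pending. A hedged way to reproduce the probe by hand with the profile's client certificate (paths taken from the client config logged earlier):

	curl --cacert /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt \
	     --cert   /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/client.crt \
	     --key    /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/client.key \
	     "https://192.168.39.141:8443/healthz?verbose"   # per-check [+]/[-] detail instead of a bare "ok"
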
	I0210 13:41:24.873897  619525 cni.go:84] Creating CNI manager for ""
	I0210 13:41:24.873907  619525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:41:24.875716  619525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 13:41:24.877057  619525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 13:41:24.894633  619525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
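The 496-byte conflist written to /etc/cni/net.d configures the bridge CNI that minikube recommends for the kvm2 driver with the crio runtime. Its exact contents are not shown in the log; a representative bridge conflist (illustrative subnet and plugin options, not necessarily minikube's file) could be written like this:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
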
	I0210 13:41:24.926049  619525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:41:24.930125  619525 system_pods.go:59] 7 kube-system pods found
	I0210 13:41:24.930204  619525 system_pods.go:61] "coredns-6d4b75cb6d-sfg2x" [5bfa5b83-6d2b-4cd7-8671-9734fca179ec] Running
	I0210 13:41:24.930223  619525 system_pods.go:61] "etcd-test-preload-233225" [fa79e508-a1b5-4903-89ff-4882a1f6da29] Running
	I0210 13:41:24.930231  619525 system_pods.go:61] "kube-apiserver-test-preload-233225" [887c1540-4105-4230-8388-0754c9d782f4] Running
	I0210 13:41:24.930242  619525 system_pods.go:61] "kube-controller-manager-test-preload-233225" [417ac7c7-c2aa-4bbb-a32c-2dbb55868d03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 13:41:24.930266  619525 system_pods.go:61] "kube-proxy-9qcbz" [49d35632-57cc-456d-bac7-5f978391473d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 13:41:24.930284  619525 system_pods.go:61] "kube-scheduler-test-preload-233225" [bf71d7aa-afd8-4111-b565-95c8cccc951d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 13:41:24.930292  619525 system_pods.go:61] "storage-provisioner" [95815649-9ff9-43d8-875a-89fc229d921f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 13:41:24.930313  619525 system_pods.go:74] duration metric: took 4.22447ms to wait for pod list to return data ...
	I0210 13:41:24.930328  619525 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:41:24.934657  619525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:41:24.934707  619525 node_conditions.go:123] node cpu capacity is 2
	I0210 13:41:24.934724  619525 node_conditions.go:105] duration metric: took 4.38966ms to run NodePressure ...
	I0210 13:41:24.934751  619525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 13:41:25.220965  619525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 13:41:25.226047  619525 kubeadm.go:739] kubelet initialised
	I0210 13:41:25.226080  619525 kubeadm.go:740] duration metric: took 5.080666ms waiting for restarted kubelet to initialise ...
	I0210 13:41:25.226092  619525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:41:25.230412  619525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sfg2x" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:25.238195  619525 pod_ready.go:98] node "test-preload-233225" hosting pod "coredns-6d4b75cb6d-sfg2x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.238227  619525 pod_ready.go:82] duration metric: took 7.779761ms for pod "coredns-6d4b75cb6d-sfg2x" in "kube-system" namespace to be "Ready" ...
	E0210 13:41:25.238248  619525 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-233225" hosting pod "coredns-6d4b75cb6d-sfg2x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.238263  619525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:25.247314  619525 pod_ready.go:98] node "test-preload-233225" hosting pod "etcd-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.247358  619525 pod_ready.go:82] duration metric: took 9.077466ms for pod "etcd-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	E0210 13:41:25.247373  619525 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-233225" hosting pod "etcd-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.247393  619525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:25.255088  619525 pod_ready.go:98] node "test-preload-233225" hosting pod "kube-apiserver-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.255131  619525 pod_ready.go:82] duration metric: took 7.722206ms for pod "kube-apiserver-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	E0210 13:41:25.255145  619525 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-233225" hosting pod "kube-apiserver-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.255155  619525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:25.333150  619525 pod_ready.go:98] node "test-preload-233225" hosting pod "kube-controller-manager-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.333180  619525 pod_ready.go:82] duration metric: took 78.011578ms for pod "kube-controller-manager-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	E0210 13:41:25.333192  619525 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-233225" hosting pod "kube-controller-manager-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.333199  619525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9qcbz" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:25.730034  619525 pod_ready.go:98] node "test-preload-233225" hosting pod "kube-proxy-9qcbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.730073  619525 pod_ready.go:82] duration metric: took 396.856787ms for pod "kube-proxy-9qcbz" in "kube-system" namespace to be "Ready" ...
	E0210 13:41:25.730083  619525 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-233225" hosting pod "kube-proxy-9qcbz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:25.730090  619525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:26.129811  619525 pod_ready.go:98] node "test-preload-233225" hosting pod "kube-scheduler-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:26.129840  619525 pod_ready.go:82] duration metric: took 399.744054ms for pod "kube-scheduler-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	E0210 13:41:26.129851  619525 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-233225" hosting pod "kube-scheduler-test-preload-233225" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:26.129858  619525 pod_ready.go:39] duration metric: took 903.75357ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:41:26.129882  619525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 13:41:26.141897  619525 ops.go:34] apiserver oom_adj: -16
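Reading /proc/<pid>/oom_adj for the apiserver is a sanity check that the control plane is protected from the kernel's OOM killer: negative values (here -16, reported through the legacy oom_adj interface) make the process much less likely to be killed under memory pressure. Reproduced by hand:

	# -16 means the OOM killer strongly prefers other processes over the apiserver
	cat /proc/$(pgrep kube-apiserver)/oom_adj
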
	I0210 13:41:26.141920  619525 kubeadm.go:597] duration metric: took 9.361775486s to restartPrimaryControlPlane
	I0210 13:41:26.141929  619525 kubeadm.go:394] duration metric: took 9.416884604s to StartCluster
	I0210 13:41:26.141955  619525 settings.go:142] acquiring lock: {Name:mk7daa7e5390489a50205707c4b69542e21eb74b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:41:26.142046  619525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:41:26.142777  619525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:41:26.143036  619525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:41:26.143121  619525 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 13:41:26.143228  619525 addons.go:69] Setting storage-provisioner=true in profile "test-preload-233225"
	I0210 13:41:26.143271  619525 addons.go:238] Setting addon storage-provisioner=true in "test-preload-233225"
	I0210 13:41:26.143244  619525 addons.go:69] Setting default-storageclass=true in profile "test-preload-233225"
	I0210 13:41:26.143298  619525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-233225"
	I0210 13:41:26.143317  619525 config.go:182] Loaded profile config "test-preload-233225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	W0210 13:41:26.143282  619525 addons.go:247] addon storage-provisioner should already be in state true
	I0210 13:41:26.143391  619525 host.go:66] Checking if "test-preload-233225" exists ...
	I0210 13:41:26.143633  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:41:26.143675  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:41:26.143760  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:41:26.143804  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:41:26.144780  619525 out.go:177] * Verifying Kubernetes components...
	I0210 13:41:26.146095  619525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:41:26.158595  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36975
	I0210 13:41:26.159108  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:41:26.159632  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:41:26.159654  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:41:26.159983  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:41:26.160186  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetState
	I0210 13:41:26.162847  619525 kapi.go:59] client config for test-preload-233225: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/client.crt", KeyFile:"/home/jenkins/minikube-integration/20390-580861/.minikube/profiles/test-preload-233225/client.key", CAFile:"/home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 13:41:26.163269  619525 addons.go:238] Setting addon default-storageclass=true in "test-preload-233225"
	W0210 13:41:26.163299  619525 addons.go:247] addon default-storageclass should already be in state true
	I0210 13:41:26.163332  619525 host.go:66] Checking if "test-preload-233225" exists ...
	I0210 13:41:26.163488  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I0210 13:41:26.163714  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:41:26.163758  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:41:26.163933  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:41:26.164439  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:41:26.164462  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:41:26.164819  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:41:26.165455  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:41:26.165504  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:41:26.179778  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0210 13:41:26.180112  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0210 13:41:26.180260  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:41:26.180627  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:41:26.180909  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:41:26.180939  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:41:26.181113  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:41:26.181137  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:41:26.181394  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:41:26.181471  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:41:26.181644  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetState
	I0210 13:41:26.181884  619525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:41:26.181932  619525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:41:26.183335  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:26.185289  619525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:41:26.186728  619525 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:41:26.186746  619525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 13:41:26.186760  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:26.189693  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:26.190117  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:26.190147  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:26.190323  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:26.190508  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:26.190662  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:26.190794  619525 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa Username:docker}
	I0210 13:41:26.241195  619525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0210 13:41:26.241681  619525 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:41:26.242223  619525 main.go:141] libmachine: Using API Version  1
	I0210 13:41:26.242247  619525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:41:26.242662  619525 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:41:26.242917  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetState
	I0210 13:41:26.244752  619525 main.go:141] libmachine: (test-preload-233225) Calling .DriverName
	I0210 13:41:26.244974  619525 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 13:41:26.244989  619525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 13:41:26.245008  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHHostname
	I0210 13:41:26.247947  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:26.248376  619525 main.go:141] libmachine: (test-preload-233225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:14:7e", ip: ""} in network mk-test-preload-233225: {Iface:virbr1 ExpiryTime:2025-02-10 14:40:54 +0000 UTC Type:0 Mac:52:54:00:6f:14:7e Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:test-preload-233225 Clientid:01:52:54:00:6f:14:7e}
	I0210 13:41:26.248407  619525 main.go:141] libmachine: (test-preload-233225) DBG | domain test-preload-233225 has defined IP address 192.168.39.141 and MAC address 52:54:00:6f:14:7e in network mk-test-preload-233225
	I0210 13:41:26.248574  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHPort
	I0210 13:41:26.248748  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHKeyPath
	I0210 13:41:26.248908  619525 main.go:141] libmachine: (test-preload-233225) Calling .GetSSHUsername
	I0210 13:41:26.249046  619525 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/test-preload-233225/id_rsa Username:docker}
	I0210 13:41:26.311232  619525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:41:26.329510  619525 node_ready.go:35] waiting up to 6m0s for node "test-preload-233225" to be "Ready" ...
	I0210 13:41:26.390660  619525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 13:41:26.452343  619525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 13:41:27.342700  619525 main.go:141] libmachine: Making call to close driver server
	I0210 13:41:27.342723  619525 main.go:141] libmachine: (test-preload-233225) Calling .Close
	I0210 13:41:27.343087  619525 main.go:141] libmachine: (test-preload-233225) DBG | Closing plugin on server side
	I0210 13:41:27.343132  619525 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:41:27.343151  619525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:41:27.343166  619525 main.go:141] libmachine: Making call to close driver server
	I0210 13:41:27.343174  619525 main.go:141] libmachine: (test-preload-233225) Calling .Close
	I0210 13:41:27.343439  619525 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:41:27.343459  619525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:41:27.343499  619525 main.go:141] libmachine: (test-preload-233225) DBG | Closing plugin on server side
	I0210 13:41:27.352220  619525 main.go:141] libmachine: Making call to close driver server
	I0210 13:41:27.352242  619525 main.go:141] libmachine: (test-preload-233225) Calling .Close
	I0210 13:41:27.352531  619525 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:41:27.352551  619525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:41:27.383223  619525 main.go:141] libmachine: Making call to close driver server
	I0210 13:41:27.383258  619525 main.go:141] libmachine: (test-preload-233225) Calling .Close
	I0210 13:41:27.383588  619525 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:41:27.383609  619525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:41:27.383620  619525 main.go:141] libmachine: Making call to close driver server
	I0210 13:41:27.383628  619525 main.go:141] libmachine: (test-preload-233225) Calling .Close
	I0210 13:41:27.383852  619525 main.go:141] libmachine: Successfully made call to close driver server
	I0210 13:41:27.383868  619525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 13:41:27.383894  619525 main.go:141] libmachine: (test-preload-233225) DBG | Closing plugin on server side
	I0210 13:41:27.385963  619525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0210 13:41:27.387153  619525 addons.go:514] duration metric: took 1.244047654s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0210 13:41:28.333050  619525 node_ready.go:53] node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:30.333132  619525 node_ready.go:53] node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:32.334863  619525 node_ready.go:53] node "test-preload-233225" has status "Ready":"False"
	I0210 13:41:33.834120  619525 node_ready.go:49] node "test-preload-233225" has status "Ready":"True"
	I0210 13:41:33.834145  619525 node_ready.go:38] duration metric: took 7.504596463s for node "test-preload-233225" to be "Ready" ...
	I0210 13:41:33.834162  619525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:41:33.838298  619525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-sfg2x" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:33.841831  619525 pod_ready.go:93] pod "coredns-6d4b75cb6d-sfg2x" in "kube-system" namespace has status "Ready":"True"
	I0210 13:41:33.841852  619525 pod_ready.go:82] duration metric: took 3.532172ms for pod "coredns-6d4b75cb6d-sfg2x" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:33.841860  619525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:35.847604  619525 pod_ready.go:103] pod "etcd-test-preload-233225" in "kube-system" namespace has status "Ready":"False"
	I0210 13:41:36.349638  619525 pod_ready.go:93] pod "etcd-test-preload-233225" in "kube-system" namespace has status "Ready":"True"
	I0210 13:41:36.349678  619525 pod_ready.go:82] duration metric: took 2.507803417s for pod "etcd-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:36.349692  619525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:36.353545  619525 pod_ready.go:93] pod "kube-apiserver-test-preload-233225" in "kube-system" namespace has status "Ready":"True"
	I0210 13:41:36.353567  619525 pod_ready.go:82] duration metric: took 3.867194ms for pod "kube-apiserver-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:36.353576  619525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:36.359677  619525 pod_ready.go:93] pod "kube-controller-manager-test-preload-233225" in "kube-system" namespace has status "Ready":"True"
	I0210 13:41:36.359696  619525 pod_ready.go:82] duration metric: took 6.11351ms for pod "kube-controller-manager-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:36.359705  619525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9qcbz" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:36.378654  619525 pod_ready.go:93] pod "kube-proxy-9qcbz" in "kube-system" namespace has status "Ready":"True"
	I0210 13:41:36.378679  619525 pod_ready.go:82] duration metric: took 18.967857ms for pod "kube-proxy-9qcbz" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:36.378689  619525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:37.034349  619525 pod_ready.go:93] pod "kube-scheduler-test-preload-233225" in "kube-system" namespace has status "Ready":"True"
	I0210 13:41:37.034379  619525 pod_ready.go:82] duration metric: took 655.684527ms for pod "kube-scheduler-test-preload-233225" in "kube-system" namespace to be "Ready" ...
	I0210 13:41:37.034392  619525 pod_ready.go:39] duration metric: took 3.200210551s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 13:41:37.034410  619525 api_server.go:52] waiting for apiserver process to appear ...
	I0210 13:41:37.034471  619525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:41:37.050101  619525 api_server.go:72] duration metric: took 10.907029584s to wait for apiserver process to appear ...
	I0210 13:41:37.050128  619525 api_server.go:88] waiting for apiserver healthz status ...
	I0210 13:41:37.050162  619525 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0210 13:41:37.054730  619525 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I0210 13:41:37.055638  619525 api_server.go:141] control plane version: v1.24.4
	I0210 13:41:37.055658  619525 api_server.go:131] duration metric: took 5.524257ms to wait for apiserver health ...
	I0210 13:41:37.055665  619525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 13:41:37.235056  619525 system_pods.go:59] 7 kube-system pods found
	I0210 13:41:37.235087  619525 system_pods.go:61] "coredns-6d4b75cb6d-sfg2x" [5bfa5b83-6d2b-4cd7-8671-9734fca179ec] Running
	I0210 13:41:37.235092  619525 system_pods.go:61] "etcd-test-preload-233225" [fa79e508-a1b5-4903-89ff-4882a1f6da29] Running
	I0210 13:41:37.235095  619525 system_pods.go:61] "kube-apiserver-test-preload-233225" [887c1540-4105-4230-8388-0754c9d782f4] Running
	I0210 13:41:37.235099  619525 system_pods.go:61] "kube-controller-manager-test-preload-233225" [417ac7c7-c2aa-4bbb-a32c-2dbb55868d03] Running
	I0210 13:41:37.235102  619525 system_pods.go:61] "kube-proxy-9qcbz" [49d35632-57cc-456d-bac7-5f978391473d] Running
	I0210 13:41:37.235105  619525 system_pods.go:61] "kube-scheduler-test-preload-233225" [bf71d7aa-afd8-4111-b565-95c8cccc951d] Running
	I0210 13:41:37.235107  619525 system_pods.go:61] "storage-provisioner" [95815649-9ff9-43d8-875a-89fc229d921f] Running
	I0210 13:41:37.235119  619525 system_pods.go:74] duration metric: took 179.442846ms to wait for pod list to return data ...
	I0210 13:41:37.235127  619525 default_sa.go:34] waiting for default service account to be created ...
	I0210 13:41:37.434278  619525 default_sa.go:45] found service account: "default"
	I0210 13:41:37.434323  619525 default_sa.go:55] duration metric: took 199.177575ms for default service account to be created ...
	I0210 13:41:37.434333  619525 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 13:41:37.634824  619525 system_pods.go:86] 7 kube-system pods found
	I0210 13:41:37.634859  619525 system_pods.go:89] "coredns-6d4b75cb6d-sfg2x" [5bfa5b83-6d2b-4cd7-8671-9734fca179ec] Running
	I0210 13:41:37.634865  619525 system_pods.go:89] "etcd-test-preload-233225" [fa79e508-a1b5-4903-89ff-4882a1f6da29] Running
	I0210 13:41:37.634869  619525 system_pods.go:89] "kube-apiserver-test-preload-233225" [887c1540-4105-4230-8388-0754c9d782f4] Running
	I0210 13:41:37.634876  619525 system_pods.go:89] "kube-controller-manager-test-preload-233225" [417ac7c7-c2aa-4bbb-a32c-2dbb55868d03] Running
	I0210 13:41:37.634882  619525 system_pods.go:89] "kube-proxy-9qcbz" [49d35632-57cc-456d-bac7-5f978391473d] Running
	I0210 13:41:37.634887  619525 system_pods.go:89] "kube-scheduler-test-preload-233225" [bf71d7aa-afd8-4111-b565-95c8cccc951d] Running
	I0210 13:41:37.634891  619525 system_pods.go:89] "storage-provisioner" [95815649-9ff9-43d8-875a-89fc229d921f] Running
	I0210 13:41:37.634897  619525 system_pods.go:126] duration metric: took 200.558559ms to wait for k8s-apps to be running ...
	I0210 13:41:37.634905  619525 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 13:41:37.634960  619525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:41:37.650870  619525 system_svc.go:56] duration metric: took 15.953746ms WaitForService to wait for kubelet
	I0210 13:41:37.650913  619525 kubeadm.go:582] duration metric: took 11.507846487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:41:37.650948  619525 node_conditions.go:102] verifying NodePressure condition ...
	I0210 13:41:37.834694  619525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 13:41:37.834730  619525 node_conditions.go:123] node cpu capacity is 2
	I0210 13:41:37.834745  619525 node_conditions.go:105] duration metric: took 183.785485ms to run NodePressure ...
	I0210 13:41:37.834762  619525 start.go:241] waiting for startup goroutines ...
	I0210 13:41:37.834773  619525 start.go:246] waiting for cluster config update ...
	I0210 13:41:37.834789  619525 start.go:255] writing updated cluster config ...
	I0210 13:41:37.835135  619525 ssh_runner.go:195] Run: rm -f paused
	I0210 13:41:37.883100  619525 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0210 13:41:37.885064  619525 out.go:201] 
	W0210 13:41:37.886566  619525 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0210 13:41:37.887824  619525 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0210 13:41:37.889137  619525 out.go:177] * Done! kubectl is now configured to use "test-preload-233225" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.798516860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194898798497545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90ec4f10-fbee-4e37-9597-cfc7dbdb93b6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.799190001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9464cc01-4e47-4ee4-85ff-d810e4c523aa name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.799238978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9464cc01-4e47-4ee4-85ff-d810e4c523aa name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.800359113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a703d3f161c4d5dbc09824c0f10eeafe1c3bc4d944c97ddcdae195cbc40ef287,PodSandboxId:656d3d1c1fe9f4539f2acba1ff209ece5809cb5a34fbbcc6229b4ea73ba0a2d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739194892195659501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sfg2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bfa5b83-6d2b-4cd7-8671-9734fca179ec,},Annotations:map[string]string{io.kubernetes.container.hash: 998a7b8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9abd18940597899dea87a95f1ebb3697bb9fe112d78885f1586ceab727f3c9,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739194886442764817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be599b4d82a63fb937b1ec0dc514c2588f61e8d7851668c34f823f05572f9ed2,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739194885282435469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89942e5b2b4587b2b0f332711e53d31d3b84b2a3e03878f8ff06fe2992c9a448,PodSandboxId:a24d2774c0cc4d2b2478ad8799cb0e64d476672c87fce8dffdfcca695a0fe0d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739194884971967379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qcbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d35632-57cc-4
56d-bac7-5f978391473d,},Annotations:map[string]string{io.kubernetes.container.hash: 7974c7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41573ac394e5d237d8901b0f61b3ea777e042a82b33fa79e3d76cf29ca0ab90,PodSandboxId:a42ed9a35d98e2ba74df9fdc1203d97e471a13d78f2bcc0846f41d5c49b0a534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739194879051960002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b96681f8eb2240ca93bc05288e8f5a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: b57bc855,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fc0f0ac2cedfa34c8122c0faf38f6d0186a77832d20c24c8b74e1e1df46ef1,PodSandboxId:c9e2b0db77a1c0218bf8fa88e37662a7a362ee0b232c389c0a527f9bc8e1996e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739194879023404702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f042da22cc80148a76b3ea2216412503,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f93f3a6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a90d80d002ffa327ec2d0eb737c7e2e67fa8cfaa1f6e6badc2b9a689b4801,PodSandboxId:4d4cd74a69d2e2f5ada6ab35518c7f1c39e06b1f60f1df94d4f91654237cce85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739194879020619993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb540cb7f714409c305a6ae31f16249,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7093e034a402c5237c36251bb6ec016cde25290f3532e5c1cead6088ea4a29,PodSandboxId:306e378cf316c5809094ba7b054fc179a8ddf3ea1d3f3a13f814227befd2f25f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739194878939004105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf811a7457c553e038436aab3e1e282c,},Annotations:map[string]
string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9464cc01-4e47-4ee4-85ff-d810e4c523aa name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.842959476Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa0e12e0-a285-46d9-b2a0-402813f1a32e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.843031757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa0e12e0-a285-46d9-b2a0-402813f1a32e name=/runtime.v1.RuntimeService/Version
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.844137548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37b3a738-6336-46e1-9d8a-8d906c90419b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.844561422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194898844539183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37b3a738-6336-46e1-9d8a-8d906c90419b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.845067936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f52e0469-3382-441c-9c0d-e8d90024e976 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.845116155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f52e0469-3382-441c-9c0d-e8d90024e976 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.845329680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a703d3f161c4d5dbc09824c0f10eeafe1c3bc4d944c97ddcdae195cbc40ef287,PodSandboxId:656d3d1c1fe9f4539f2acba1ff209ece5809cb5a34fbbcc6229b4ea73ba0a2d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739194892195659501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sfg2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bfa5b83-6d2b-4cd7-8671-9734fca179ec,},Annotations:map[string]string{io.kubernetes.container.hash: 998a7b8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9abd18940597899dea87a95f1ebb3697bb9fe112d78885f1586ceab727f3c9,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739194886442764817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be599b4d82a63fb937b1ec0dc514c2588f61e8d7851668c34f823f05572f9ed2,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739194885282435469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89942e5b2b4587b2b0f332711e53d31d3b84b2a3e03878f8ff06fe2992c9a448,PodSandboxId:a24d2774c0cc4d2b2478ad8799cb0e64d476672c87fce8dffdfcca695a0fe0d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739194884971967379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qcbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d35632-57cc-4
56d-bac7-5f978391473d,},Annotations:map[string]string{io.kubernetes.container.hash: 7974c7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41573ac394e5d237d8901b0f61b3ea777e042a82b33fa79e3d76cf29ca0ab90,PodSandboxId:a42ed9a35d98e2ba74df9fdc1203d97e471a13d78f2bcc0846f41d5c49b0a534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739194879051960002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b96681f8eb2240ca93bc05288e8f5a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: b57bc855,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fc0f0ac2cedfa34c8122c0faf38f6d0186a77832d20c24c8b74e1e1df46ef1,PodSandboxId:c9e2b0db77a1c0218bf8fa88e37662a7a362ee0b232c389c0a527f9bc8e1996e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739194879023404702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f042da22cc80148a76b3ea2216412503,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f93f3a6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a90d80d002ffa327ec2d0eb737c7e2e67fa8cfaa1f6e6badc2b9a689b4801,PodSandboxId:4d4cd74a69d2e2f5ada6ab35518c7f1c39e06b1f60f1df94d4f91654237cce85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739194879020619993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb540cb7f714409c305a6ae31f16249,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7093e034a402c5237c36251bb6ec016cde25290f3532e5c1cead6088ea4a29,PodSandboxId:306e378cf316c5809094ba7b054fc179a8ddf3ea1d3f3a13f814227befd2f25f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739194878939004105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf811a7457c553e038436aab3e1e282c,},Annotations:map[string]
string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f52e0469-3382-441c-9c0d-e8d90024e976 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.884240732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18d8ad4d-bd21-45d4-b6e0-b956c3ee0dbd name=/runtime.v1.RuntimeService/Version
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.884304599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18d8ad4d-bd21-45d4-b6e0-b956c3ee0dbd name=/runtime.v1.RuntimeService/Version
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.885562939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f66d3ba-f92e-4474-bf34-2ccf09728b6a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.886113978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194898886089791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f66d3ba-f92e-4474-bf34-2ccf09728b6a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.886662916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00bea14f-fae3-49fa-a947-c3098dbb5f36 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.886779411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00bea14f-fae3-49fa-a947-c3098dbb5f36 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.887040293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a703d3f161c4d5dbc09824c0f10eeafe1c3bc4d944c97ddcdae195cbc40ef287,PodSandboxId:656d3d1c1fe9f4539f2acba1ff209ece5809cb5a34fbbcc6229b4ea73ba0a2d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739194892195659501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sfg2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bfa5b83-6d2b-4cd7-8671-9734fca179ec,},Annotations:map[string]string{io.kubernetes.container.hash: 998a7b8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9abd18940597899dea87a95f1ebb3697bb9fe112d78885f1586ceab727f3c9,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739194886442764817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be599b4d82a63fb937b1ec0dc514c2588f61e8d7851668c34f823f05572f9ed2,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739194885282435469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89942e5b2b4587b2b0f332711e53d31d3b84b2a3e03878f8ff06fe2992c9a448,PodSandboxId:a24d2774c0cc4d2b2478ad8799cb0e64d476672c87fce8dffdfcca695a0fe0d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739194884971967379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qcbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d35632-57cc-4
56d-bac7-5f978391473d,},Annotations:map[string]string{io.kubernetes.container.hash: 7974c7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41573ac394e5d237d8901b0f61b3ea777e042a82b33fa79e3d76cf29ca0ab90,PodSandboxId:a42ed9a35d98e2ba74df9fdc1203d97e471a13d78f2bcc0846f41d5c49b0a534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739194879051960002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b96681f8eb2240ca93bc05288e8f5a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: b57bc855,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fc0f0ac2cedfa34c8122c0faf38f6d0186a77832d20c24c8b74e1e1df46ef1,PodSandboxId:c9e2b0db77a1c0218bf8fa88e37662a7a362ee0b232c389c0a527f9bc8e1996e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739194879023404702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f042da22cc80148a76b3ea2216412503,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f93f3a6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a90d80d002ffa327ec2d0eb737c7e2e67fa8cfaa1f6e6badc2b9a689b4801,PodSandboxId:4d4cd74a69d2e2f5ada6ab35518c7f1c39e06b1f60f1df94d4f91654237cce85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739194879020619993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb540cb7f714409c305a6ae31f16249,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7093e034a402c5237c36251bb6ec016cde25290f3532e5c1cead6088ea4a29,PodSandboxId:306e378cf316c5809094ba7b054fc179a8ddf3ea1d3f3a13f814227befd2f25f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739194878939004105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf811a7457c553e038436aab3e1e282c,},Annotations:map[string]
string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00bea14f-fae3-49fa-a947-c3098dbb5f36 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.920085000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57da2b9c-b533-4b08-a3ad-370bab58f051 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.920174186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57da2b9c-b533-4b08-a3ad-370bab58f051 name=/runtime.v1.RuntimeService/Version
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.921338846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69e7d736-bb13-4297-9ba1-2559de85acaa name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.922009533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739194898921981861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69e7d736-bb13-4297-9ba1-2559de85acaa name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.922610737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4efd085e-7e24-4401-a988-0632142d5f1b name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.922800149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4efd085e-7e24-4401-a988-0632142d5f1b name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 13:41:38 test-preload-233225 crio[673]: time="2025-02-10 13:41:38.923047512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a703d3f161c4d5dbc09824c0f10eeafe1c3bc4d944c97ddcdae195cbc40ef287,PodSandboxId:656d3d1c1fe9f4539f2acba1ff209ece5809cb5a34fbbcc6229b4ea73ba0a2d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739194892195659501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-sfg2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bfa5b83-6d2b-4cd7-8671-9734fca179ec,},Annotations:map[string]string{io.kubernetes.container.hash: 998a7b8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9abd18940597899dea87a95f1ebb3697bb9fe112d78885f1586ceab727f3c9,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739194886442764817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be599b4d82a63fb937b1ec0dc514c2588f61e8d7851668c34f823f05572f9ed2,PodSandboxId:d08dd59d99678bbb25418b88dd5e329f8308153d3c879acff77e796bee4c3dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739194885282435469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 95815649-9ff9-43d8-875a-89fc229d921f,},Annotations:map[string]string{io.kubernetes.container.hash: dec5be07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89942e5b2b4587b2b0f332711e53d31d3b84b2a3e03878f8ff06fe2992c9a448,PodSandboxId:a24d2774c0cc4d2b2478ad8799cb0e64d476672c87fce8dffdfcca695a0fe0d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739194884971967379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qcbz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d35632-57cc-4
56d-bac7-5f978391473d,},Annotations:map[string]string{io.kubernetes.container.hash: 7974c7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41573ac394e5d237d8901b0f61b3ea777e042a82b33fa79e3d76cf29ca0ab90,PodSandboxId:a42ed9a35d98e2ba74df9fdc1203d97e471a13d78f2bcc0846f41d5c49b0a534,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739194879051960002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b96681f8eb2240ca93bc05288e8f5a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: b57bc855,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fc0f0ac2cedfa34c8122c0faf38f6d0186a77832d20c24c8b74e1e1df46ef1,PodSandboxId:c9e2b0db77a1c0218bf8fa88e37662a7a362ee0b232c389c0a527f9bc8e1996e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739194879023404702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f042da22cc80148a76b3ea2216412503,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f93f3a6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a90d80d002ffa327ec2d0eb737c7e2e67fa8cfaa1f6e6badc2b9a689b4801,PodSandboxId:4d4cd74a69d2e2f5ada6ab35518c7f1c39e06b1f60f1df94d4f91654237cce85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739194879020619993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bb540cb7f714409c305a6ae31f16249,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7093e034a402c5237c36251bb6ec016cde25290f3532e5c1cead6088ea4a29,PodSandboxId:306e378cf316c5809094ba7b054fc179a8ddf3ea1d3f3a13f814227befd2f25f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739194878939004105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-233225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf811a7457c553e038436aab3e1e282c,},Annotations:map[string]
string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4efd085e-7e24-4401-a988-0632142d5f1b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a703d3f161c4d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   656d3d1c1fe9f       coredns-6d4b75cb6d-sfg2x
	6e9abd1894059       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       2                   d08dd59d99678       storage-provisioner
	be599b4d82a63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       1                   d08dd59d99678       storage-provisioner
	89942e5b2b458       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   a24d2774c0cc4       kube-proxy-9qcbz
	c41573ac394e5       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   a42ed9a35d98e       etcd-test-preload-233225
	71fc0f0ac2ced       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   c9e2b0db77a1c       kube-apiserver-test-preload-233225
	1f8a90d80d002       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   4d4cd74a69d2e       kube-controller-manager-test-preload-233225
	5d7093e034a40       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   306e378cf316c       kube-scheduler-test-preload-233225
	
	
	==> coredns [a703d3f161c4d5dbc09824c0f10eeafe1c3bc4d944c97ddcdae195cbc40ef287] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:50923 - 59772 "HINFO IN 7232910274541221838.1344041985208667167. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021775509s
	
	
	==> describe nodes <==
	Name:               test-preload-233225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-233225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04
	                    minikube.k8s.io/name=test-preload-233225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T13_39_54_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 13:39:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-233225
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 13:41:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 13:41:33 +0000   Mon, 10 Feb 2025 13:39:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 13:41:33 +0000   Mon, 10 Feb 2025 13:39:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 13:41:33 +0000   Mon, 10 Feb 2025 13:39:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 13:41:33 +0000   Mon, 10 Feb 2025 13:41:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    test-preload-233225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6be11342f5bf419c86fa72aaed324540
	  System UUID:                6be11342-f5bf-419c-86fa-72aaed324540
	  Boot ID:                    e7bce231-af45-480f-a6e1-9609a939b12e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-sfg2x                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 etcd-test-preload-233225                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-233225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-test-preload-233225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-9qcbz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-test-preload-233225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 89s                kube-proxy       
	  Normal  Starting                 105s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  105s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  105s               kubelet          Node test-preload-233225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s               kubelet          Node test-preload-233225 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s               kubelet          Node test-preload-233225 status is now: NodeHasSufficientPID
	  Normal  NodeReady                95s                kubelet          Node test-preload-233225 status is now: NodeReady
	  Normal  RegisteredNode           93s                node-controller  Node test-preload-233225 event: Registered Node test-preload-233225 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-233225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-233225 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-233225 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-233225 event: Registered Node test-preload-233225 in Controller
	
	
	==> dmesg <==
	[Feb10 13:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052296] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041061] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.969292] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.766483] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.578347] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb10 13:41] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.054981] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057957] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.189273] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.108131] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.267509] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[ +13.099860] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.057596] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.805161] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +6.867443] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.301188] systemd-fstab-generator[1782]: Ignoring "noauto" option for root device
	[  +5.789309] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [c41573ac394e5d237d8901b0f61b3ea777e042a82b33fa79e3d76cf29ca0ab90] <==
	{"level":"info","ts":"2025-02-10T13:41:19.383Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"2398e045949c73cb","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-10T13:41:19.388Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-10T13:41:19.390Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2398e045949c73cb","initial-advertise-peer-urls":["https://192.168.39.141:2380"],"listen-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-10T13:41:19.395Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-10T13:41:19.395Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-10T13:41:19.396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb switched to configuration voters=(2565046577238143947)"}
	{"level":"info","ts":"2025-02-10T13:41:19.396Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","added-peer-id":"2398e045949c73cb","added-peer-peer-urls":["https://192.168.39.141:2380"]}
	{"level":"info","ts":"2025-02-10T13:41:19.397Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:41:19.398Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T13:41:19.405Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2025-02-10T13:41:19.408Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgPreVoteResp from 2398e045949c73cb at term 2"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became candidate at term 3"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgVoteResp from 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became leader at term 3"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2398e045949c73cb elected leader 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2025-02-10T13:41:20.930Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"2398e045949c73cb","local-member-attributes":"{Name:test-preload-233225 ClientURLs:[https://192.168.39.141:2379]}","request-path":"/0/members/2398e045949c73cb/attributes","cluster-id":"bf8381628c3e4cea","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T13:41:20.931Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T13:41:20.933Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.141:2379"}
	{"level":"info","ts":"2025-02-10T13:41:20.933Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T13:41:20.934Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T13:41:20.934Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T13:41:20.934Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:41:39 up 0 min,  0 users,  load average: 0.94, 0.26, 0.09
	Linux test-preload-233225 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [71fc0f0ac2cedfa34c8122c0faf38f6d0186a77832d20c24c8b74e1e1df46ef1] <==
	I0210 13:41:23.345751       1 controller.go:85] Starting OpenAPI V3 controller
	I0210 13:41:23.345789       1 naming_controller.go:291] Starting NamingConditionController
	I0210 13:41:23.346095       1 establishing_controller.go:76] Starting EstablishingController
	I0210 13:41:23.346375       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0210 13:41:23.346417       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 13:41:23.346454       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0210 13:41:23.431332       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0210 13:41:23.431622       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0210 13:41:23.437102       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0210 13:41:23.437460       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 13:41:23.439480       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0210 13:41:23.508551       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0210 13:41:23.509084       1 cache.go:39] Caches are synced for autoregister controller
	I0210 13:41:23.509871       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0210 13:41:23.510429       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 13:41:24.009692       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0210 13:41:24.315859       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 13:41:25.090405       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0210 13:41:25.109003       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0210 13:41:25.163630       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0210 13:41:25.190431       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 13:41:25.205677       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 13:41:25.500260       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0210 13:41:36.354669       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 13:41:36.404462       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1f8a90d80d002ffa327ec2d0eb737c7e2e67fa8cfaa1f6e6badc2b9a689b4801] <==
	I0210 13:41:36.241031       1 shared_informer.go:262] Caches are synced for persistent volume
	I0210 13:41:36.242542       1 shared_informer.go:262] Caches are synced for PV protection
	I0210 13:41:36.244987       1 shared_informer.go:262] Caches are synced for crt configmap
	I0210 13:41:36.246223       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0210 13:41:36.252068       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0210 13:41:36.252137       1 shared_informer.go:262] Caches are synced for taint
	I0210 13:41:36.252218       1 shared_informer.go:262] Caches are synced for job
	I0210 13:41:36.252273       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0210 13:41:36.252354       1 event.go:294] "Event occurred" object="test-preload-233225" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-233225 event: Registered Node test-preload-233225 in Controller"
	I0210 13:41:36.252781       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0210 13:41:36.252277       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0210 13:41:36.254913       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-233225. Assuming now as a timestamp.
	I0210 13:41:36.254974       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0210 13:41:36.253307       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0210 13:41:36.253328       1 shared_informer.go:262] Caches are synced for daemon sets
	I0210 13:41:36.270182       1 shared_informer.go:262] Caches are synced for ephemeral
	I0210 13:41:36.282974       1 shared_informer.go:262] Caches are synced for disruption
	I0210 13:41:36.283057       1 disruption.go:371] Sending events to api server.
	I0210 13:41:36.386471       1 shared_informer.go:262] Caches are synced for cronjob
	I0210 13:41:36.396221       1 shared_informer.go:262] Caches are synced for resource quota
	I0210 13:41:36.405923       1 shared_informer.go:262] Caches are synced for resource quota
	I0210 13:41:36.428836       1 shared_informer.go:262] Caches are synced for HPA
	I0210 13:41:36.816689       1 shared_informer.go:262] Caches are synced for garbage collector
	I0210 13:41:36.817917       1 shared_informer.go:262] Caches are synced for garbage collector
	I0210 13:41:36.817947       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [89942e5b2b4587b2b0f332711e53d31d3b84b2a3e03878f8ff06fe2992c9a448] <==
	I0210 13:41:25.424030       1 node.go:163] Successfully retrieved node IP: 192.168.39.141
	I0210 13:41:25.424286       1 server_others.go:138] "Detected node IP" address="192.168.39.141"
	I0210 13:41:25.424408       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0210 13:41:25.484599       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0210 13:41:25.484632       1 server_others.go:206] "Using iptables Proxier"
	I0210 13:41:25.485169       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0210 13:41:25.487424       1 server.go:661] "Version info" version="v1.24.4"
	I0210 13:41:25.487528       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 13:41:25.495251       1 config.go:317] "Starting service config controller"
	I0210 13:41:25.495788       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0210 13:41:25.495843       1 config.go:226] "Starting endpoint slice config controller"
	I0210 13:41:25.495866       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0210 13:41:25.497516       1 config.go:444] "Starting node config controller"
	I0210 13:41:25.497541       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0210 13:41:25.596230       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0210 13:41:25.596374       1 shared_informer.go:262] Caches are synced for service config
	I0210 13:41:25.598117       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5d7093e034a402c5237c36251bb6ec016cde25290f3532e5c1cead6088ea4a29] <==
	I0210 13:41:20.354187       1 serving.go:348] Generated self-signed cert in-memory
	W0210 13:41:23.356097       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 13:41:23.356544       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 13:41:23.356576       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 13:41:23.356870       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 13:41:23.413952       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0210 13:41:23.413991       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 13:41:23.424624       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 13:41:23.424883       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 13:41:23.428475       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0210 13:41:23.428576       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0210 13:41:23.438925       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0210 13:41:23.439001       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I0210 13:41:24.726269       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 13:41:23 test-preload-233225 kubelet[1131]: I0210 13:41:23.474609    1131 setters.go:532] "Node became not ready" node="test-preload-233225" condition={Type:Ready Status:False LastHeartbeatTime:2025-02-10 13:41:23.474536727 +0000 UTC m=+5.366098584 LastTransitionTime:2025-02-10 13:41:23.474536727 +0000 UTC m=+5.366098584 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.226234    1131 apiserver.go:52] "Watching apiserver"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.230686    1131 topology_manager.go:200] "Topology Admit Handler"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.230848    1131 topology_manager.go:200] "Topology Admit Handler"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.230887    1131 topology_manager.go:200] "Topology Admit Handler"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: E0210 13:41:24.232701    1131 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-sfg2x" podUID=5bfa5b83-6d2b-4cd7-8671-9734fca179ec
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.290654    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49d35632-57cc-456d-bac7-5f978391473d-lib-modules\") pod \"kube-proxy-9qcbz\" (UID: \"49d35632-57cc-456d-bac7-5f978391473d\") " pod="kube-system/kube-proxy-9qcbz"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291095    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49d35632-57cc-456d-bac7-5f978391473d-kube-proxy\") pod \"kube-proxy-9qcbz\" (UID: \"49d35632-57cc-456d-bac7-5f978391473d\") " pod="kube-system/kube-proxy-9qcbz"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291183    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8g6b\" (UniqueName: \"kubernetes.io/projected/95815649-9ff9-43d8-875a-89fc229d921f-kube-api-access-m8g6b\") pod \"storage-provisioner\" (UID: \"95815649-9ff9-43d8-875a-89fc229d921f\") " pod="kube-system/storage-provisioner"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291382    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8wlp\" (UniqueName: \"kubernetes.io/projected/49d35632-57cc-456d-bac7-5f978391473d-kube-api-access-d8wlp\") pod \"kube-proxy-9qcbz\" (UID: \"49d35632-57cc-456d-bac7-5f978391473d\") " pod="kube-system/kube-proxy-9qcbz"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291452    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/95815649-9ff9-43d8-875a-89fc229d921f-tmp\") pod \"storage-provisioner\" (UID: \"95815649-9ff9-43d8-875a-89fc229d921f\") " pod="kube-system/storage-provisioner"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291597    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume\") pod \"coredns-6d4b75cb6d-sfg2x\" (UID: \"5bfa5b83-6d2b-4cd7-8671-9734fca179ec\") " pod="kube-system/coredns-6d4b75cb6d-sfg2x"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291648    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkjf\" (UniqueName: \"kubernetes.io/projected/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-kube-api-access-dzkjf\") pod \"coredns-6d4b75cb6d-sfg2x\" (UID: \"5bfa5b83-6d2b-4cd7-8671-9734fca179ec\") " pod="kube-system/coredns-6d4b75cb6d-sfg2x"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291668    1131 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49d35632-57cc-456d-bac7-5f978391473d-xtables-lock\") pod \"kube-proxy-9qcbz\" (UID: \"49d35632-57cc-456d-bac7-5f978391473d\") " pod="kube-system/kube-proxy-9qcbz"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: I0210 13:41:24.291686    1131 reconciler.go:159] "Reconciler: start to sync state"
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: E0210 13:41:24.396396    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: E0210 13:41:24.396505    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume podName:5bfa5b83-6d2b-4cd7-8671-9734fca179ec nodeName:}" failed. No retries permitted until 2025-02-10 13:41:24.896473703 +0000 UTC m=+6.788035574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume") pod "coredns-6d4b75cb6d-sfg2x" (UID: "5bfa5b83-6d2b-4cd7-8671-9734fca179ec") : object "kube-system"/"coredns" not registered
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: E0210 13:41:24.899557    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:41:24 test-preload-233225 kubelet[1131]: E0210 13:41:24.899624    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume podName:5bfa5b83-6d2b-4cd7-8671-9734fca179ec nodeName:}" failed. No retries permitted until 2025-02-10 13:41:25.899609169 +0000 UTC m=+7.791171026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume") pod "coredns-6d4b75cb6d-sfg2x" (UID: "5bfa5b83-6d2b-4cd7-8671-9734fca179ec") : object "kube-system"/"coredns" not registered
	Feb 10 13:41:25 test-preload-233225 kubelet[1131]: E0210 13:41:25.908890    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:41:25 test-preload-233225 kubelet[1131]: E0210 13:41:25.909037    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume podName:5bfa5b83-6d2b-4cd7-8671-9734fca179ec nodeName:}" failed. No retries permitted until 2025-02-10 13:41:27.90898709 +0000 UTC m=+9.800548948 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume") pod "coredns-6d4b75cb6d-sfg2x" (UID: "5bfa5b83-6d2b-4cd7-8671-9734fca179ec") : object "kube-system"/"coredns" not registered
	Feb 10 13:41:26 test-preload-233225 kubelet[1131]: E0210 13:41:26.360106    1131 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-sfg2x" podUID=5bfa5b83-6d2b-4cd7-8671-9734fca179ec
	Feb 10 13:41:26 test-preload-233225 kubelet[1131]: I0210 13:41:26.414909    1131 scope.go:110] "RemoveContainer" containerID="be599b4d82a63fb937b1ec0dc514c2588f61e8d7851668c34f823f05572f9ed2"
	Feb 10 13:41:27 test-preload-233225 kubelet[1131]: E0210 13:41:27.924946    1131 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 10 13:41:27 test-preload-233225 kubelet[1131]: E0210 13:41:27.925050    1131 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume podName:5bfa5b83-6d2b-4cd7-8671-9734fca179ec nodeName:}" failed. No retries permitted until 2025-02-10 13:41:31.925032065 +0000 UTC m=+13.816593934 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5bfa5b83-6d2b-4cd7-8671-9734fca179ec-config-volume") pod "coredns-6d4b75cb6d-sfg2x" (UID: "5bfa5b83-6d2b-4cd7-8671-9734fca179ec") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [6e9abd18940597899dea87a95f1ebb3697bb9fe112d78885f1586ceab727f3c9] <==
	I0210 13:41:26.569332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 13:41:26.603135       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 13:41:26.603204       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [be599b4d82a63fb937b1ec0dc514c2588f61e8d7851668c34f823f05572f9ed2] <==
	I0210 13:41:25.422557       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0210 13:41:25.445902       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-233225 -n test-preload-233225
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-233225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-233225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-233225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-233225: (1.150889489s)
--- FAIL: TestPreload (178.91s)
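The kubelet and storage-provisioner entries above show the two symptoms behind this failed restart: the node stays NotReady until a CNI config file appears in /etc/cni/net.d/, and the coredns config-volume mount keeps being retried until the kube-system/coredns ConfigMap is visible again through the restarted API server. Below is a minimal, hypothetical shell sketch (not part of the test suite) for watching both conditions clear during a post-mortem; only the test-preload-233225 profile/context name is taken from the logs above, everything else is an assumption.

	#!/usr/bin/env bash
	# Hypothetical post-mortem helper: poll the two symptoms seen in the
	# TestPreload logs above. Only the profile/context name comes from the
	# report; the rest is an illustrative assumption, not the test's logic.
	set -euo pipefail
	PROFILE=test-preload-233225

	# Wait for a CNI config to exist inside the VM ("No CNI configuration
	# file in /etc/cni/net.d/" in the kubelet log).
	until out/minikube-linux-amd64 -p "$PROFILE" ssh "ls /etc/cni/net.d/ | grep -q ."; do
	  sleep 2
	done

	# Wait for the coredns ConfigMap to be registered again, which unblocks
	# the config-volume MountVolume.SetUp retries logged by the kubelet.
	until kubectl --context "$PROFILE" -n kube-system get configmap coredns >/dev/null 2>&1; do
	  sleep 2
	done

	echo "CNI config present and kube-system/coredns ConfigMap registered"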

                                                
                                    
TestKubernetesUpgrade (385.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m8.565324039s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-935801] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-935801" primary control-plane node in "kubernetes-upgrade-935801" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:46:27.372956  625844 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:46:27.373118  625844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:46:27.373129  625844 out.go:358] Setting ErrFile to fd 2...
	I0210 13:46:27.373135  625844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:46:27.373313  625844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:46:27.373880  625844 out.go:352] Setting JSON to false
	I0210 13:46:27.374881  625844 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12532,"bootTime":1739182655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:46:27.374998  625844 start.go:139] virtualization: kvm guest
	I0210 13:46:27.377009  625844 out.go:177] * [kubernetes-upgrade-935801] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:46:27.378274  625844 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 13:46:27.378278  625844 notify.go:220] Checking for updates...
	I0210 13:46:27.379568  625844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:46:27.380830  625844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:46:27.381946  625844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:46:27.383048  625844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:46:27.384069  625844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:46:27.385547  625844 config.go:182] Loaded profile config "NoKubernetes-013011": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0210 13:46:27.385635  625844 config.go:182] Loaded profile config "cert-expiration-959248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:46:27.385715  625844 config.go:182] Loaded profile config "running-upgrade-115286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0210 13:46:27.385790  625844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:46:27.421836  625844 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 13:46:27.423029  625844 start.go:297] selected driver: kvm2
	I0210 13:46:27.423041  625844 start.go:901] validating driver "kvm2" against <nil>
	I0210 13:46:27.423054  625844 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:46:27.423713  625844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:46:27.423806  625844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:46:27.439220  625844 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:46:27.439266  625844 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 13:46:27.439567  625844 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 13:46:27.439607  625844 cni.go:84] Creating CNI manager for ""
	I0210 13:46:27.439669  625844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:46:27.439685  625844 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 13:46:27.439745  625844 start.go:340] cluster config:
	{Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:46:27.439882  625844 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:46:27.441451  625844 out.go:177] * Starting "kubernetes-upgrade-935801" primary control-plane node in "kubernetes-upgrade-935801" cluster
	I0210 13:46:27.442631  625844 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:46:27.442671  625844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 13:46:27.442683  625844 cache.go:56] Caching tarball of preloaded images
	I0210 13:46:27.442777  625844 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:46:27.442791  625844 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 13:46:27.442952  625844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/config.json ...
	I0210 13:46:27.442985  625844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/config.json: {Name:mk89e82019a40dcbd0c58b9669b2241abea48289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:46:27.443154  625844 start.go:360] acquireMachinesLock for kubernetes-upgrade-935801: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:47:05.213326  625844 start.go:364] duration metric: took 37.770127212s to acquireMachinesLock for "kubernetes-upgrade-935801"
	I0210 13:47:05.213411  625844 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:47:05.213562  625844 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 13:47:05.216400  625844 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 13:47:05.216629  625844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:47:05.216701  625844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:47:05.235305  625844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0210 13:47:05.235730  625844 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:47:05.236364  625844 main.go:141] libmachine: Using API Version  1
	I0210 13:47:05.236387  625844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:47:05.236689  625844 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:47:05.236904  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetMachineName
	I0210 13:47:05.237048  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:05.237224  625844 start.go:159] libmachine.API.Create for "kubernetes-upgrade-935801" (driver="kvm2")
	I0210 13:47:05.237256  625844 client.go:168] LocalClient.Create starting
	I0210 13:47:05.237288  625844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem
	I0210 13:47:05.237325  625844 main.go:141] libmachine: Decoding PEM data...
	I0210 13:47:05.237359  625844 main.go:141] libmachine: Parsing certificate...
	I0210 13:47:05.237439  625844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem
	I0210 13:47:05.237471  625844 main.go:141] libmachine: Decoding PEM data...
	I0210 13:47:05.237493  625844 main.go:141] libmachine: Parsing certificate...
	I0210 13:47:05.237543  625844 main.go:141] libmachine: Running pre-create checks...
	I0210 13:47:05.237552  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .PreCreateCheck
	I0210 13:47:05.237885  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetConfigRaw
	I0210 13:47:05.238374  625844 main.go:141] libmachine: Creating machine...
	I0210 13:47:05.238393  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .Create
	I0210 13:47:05.238517  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) creating KVM machine...
	I0210 13:47:05.238537  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) creating network...
	I0210 13:47:05.239597  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found existing default KVM network
	I0210 13:47:05.240936  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.240796  626233 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:8d:21} reservation:<nil>}
	I0210 13:47:05.242031  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.241962  626233 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:71:7a} reservation:<nil>}
	I0210 13:47:05.243066  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.242986  626233 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:d0:6c} reservation:<nil>}
	I0210 13:47:05.244190  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.244112  626233 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000285fd0}
	I0210 13:47:05.244214  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | created network xml: 
	I0210 13:47:05.244226  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | <network>
	I0210 13:47:05.244234  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |   <name>mk-kubernetes-upgrade-935801</name>
	I0210 13:47:05.244253  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |   <dns enable='no'/>
	I0210 13:47:05.244262  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |   
	I0210 13:47:05.244301  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0210 13:47:05.244319  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |     <dhcp>
	I0210 13:47:05.244345  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0210 13:47:05.244357  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |     </dhcp>
	I0210 13:47:05.244369  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |   </ip>
	I0210 13:47:05.244380  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG |   
	I0210 13:47:05.244407  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | </network>
	I0210 13:47:05.244429  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | 
	I0210 13:47:05.250170  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | trying to create private KVM network mk-kubernetes-upgrade-935801 192.168.72.0/24...
	I0210 13:47:05.323850  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) setting up store path in /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801 ...
	I0210 13:47:05.323902  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | private KVM network mk-kubernetes-upgrade-935801 192.168.72.0/24 created
	I0210 13:47:05.323914  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) building disk image from file:///home/jenkins/minikube-integration/20390-580861/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 13:47:05.323936  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.323769  626233 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:47:05.323956  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Downloading /home/jenkins/minikube-integration/20390-580861/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20390-580861/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 13:47:05.626381  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.626237  626233 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa...
	I0210 13:47:05.736742  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.736597  626233 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/kubernetes-upgrade-935801.rawdisk...
	I0210 13:47:05.736778  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | Writing magic tar header
	I0210 13:47:05.736794  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | Writing SSH key tar header
	I0210 13:47:05.736806  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:05.736718  626233 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801 ...
	I0210 13:47:05.736822  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801
	I0210 13:47:05.736842  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801 (perms=drwx------)
	I0210 13:47:05.736859  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube/machines
	I0210 13:47:05.736876  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube/machines (perms=drwxr-xr-x)
	I0210 13:47:05.736890  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:47:05.736907  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861
	I0210 13:47:05.736920  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 13:47:05.736932  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube (perms=drwxr-xr-x)
	I0210 13:47:05.736952  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | checking permissions on dir: /home/jenkins
	I0210 13:47:05.736966  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) setting executable bit set on /home/jenkins/minikube-integration/20390-580861 (perms=drwxrwxr-x)
	I0210 13:47:05.736985  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 13:47:05.736998  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 13:47:05.737011  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) creating domain...
	I0210 13:47:05.737023  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | checking permissions on dir: /home
	I0210 13:47:05.737035  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | skipping /home - not owner
	I0210 13:47:05.738216  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) define libvirt domain using xml: 
	I0210 13:47:05.738241  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) <domain type='kvm'>
	I0210 13:47:05.738248  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   <name>kubernetes-upgrade-935801</name>
	I0210 13:47:05.738259  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   <memory unit='MiB'>2200</memory>
	I0210 13:47:05.738268  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   <vcpu>2</vcpu>
	I0210 13:47:05.738277  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   <features>
	I0210 13:47:05.738292  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <acpi/>
	I0210 13:47:05.738297  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <apic/>
	I0210 13:47:05.738302  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <pae/>
	I0210 13:47:05.738307  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     
	I0210 13:47:05.738317  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   </features>
	I0210 13:47:05.738325  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   <cpu mode='host-passthrough'>
	I0210 13:47:05.738329  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   
	I0210 13:47:05.738333  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   </cpu>
	I0210 13:47:05.738341  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   <os>
	I0210 13:47:05.738349  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <type>hvm</type>
	I0210 13:47:05.738361  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <boot dev='cdrom'/>
	I0210 13:47:05.738371  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <boot dev='hd'/>
	I0210 13:47:05.738383  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <bootmenu enable='no'/>
	I0210 13:47:05.738389  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   </os>
	I0210 13:47:05.738397  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   <devices>
	I0210 13:47:05.738404  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <disk type='file' device='cdrom'>
	I0210 13:47:05.738429  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <source file='/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/boot2docker.iso'/>
	I0210 13:47:05.738438  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <target dev='hdc' bus='scsi'/>
	I0210 13:47:05.738443  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <readonly/>
	I0210 13:47:05.738447  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     </disk>
	I0210 13:47:05.738453  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <disk type='file' device='disk'>
	I0210 13:47:05.738458  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 13:47:05.738472  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <source file='/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/kubernetes-upgrade-935801.rawdisk'/>
	I0210 13:47:05.738479  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <target dev='hda' bus='virtio'/>
	I0210 13:47:05.738487  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     </disk>
	I0210 13:47:05.738494  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <interface type='network'>
	I0210 13:47:05.738504  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <source network='mk-kubernetes-upgrade-935801'/>
	I0210 13:47:05.738511  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <model type='virtio'/>
	I0210 13:47:05.738519  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     </interface>
	I0210 13:47:05.738526  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <interface type='network'>
	I0210 13:47:05.738538  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <source network='default'/>
	I0210 13:47:05.738545  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <model type='virtio'/>
	I0210 13:47:05.738552  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     </interface>
	I0210 13:47:05.738559  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <serial type='pty'>
	I0210 13:47:05.738567  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <target port='0'/>
	I0210 13:47:05.738573  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     </serial>
	I0210 13:47:05.738582  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <console type='pty'>
	I0210 13:47:05.738590  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <target type='serial' port='0'/>
	I0210 13:47:05.738597  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     </console>
	I0210 13:47:05.738603  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     <rng model='virtio'>
	I0210 13:47:05.738612  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)       <backend model='random'>/dev/random</backend>
	I0210 13:47:05.738618  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     </rng>
	I0210 13:47:05.738625  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     
	I0210 13:47:05.738631  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)     
	I0210 13:47:05.738639  625844 main.go:141] libmachine: (kubernetes-upgrade-935801)   </devices>
	I0210 13:47:05.738646  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) </domain>
	I0210 13:47:05.738660  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) 
	I0210 13:47:05.745451  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:98:5a:7d in network default
	I0210 13:47:05.745993  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) starting domain...
	I0210 13:47:05.746018  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:05.746040  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) ensuring networks are active...
	I0210 13:47:05.746798  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Ensuring network default is active
	I0210 13:47:05.747151  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Ensuring network mk-kubernetes-upgrade-935801 is active
	I0210 13:47:05.747812  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) getting domain XML...
	I0210 13:47:05.748555  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) creating domain...
	I0210 13:47:07.116037  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) waiting for IP...
	I0210 13:47:07.116991  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:07.117488  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:07.117538  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:07.117462  626233 retry.go:31] will retry after 236.737173ms: waiting for domain to come up
	I0210 13:47:07.361342  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:07.361779  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:07.361817  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:07.361787  626233 retry.go:31] will retry after 293.210385ms: waiting for domain to come up
	I0210 13:47:07.656127  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:07.656708  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:07.656736  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:07.656672  626233 retry.go:31] will retry after 375.533755ms: waiting for domain to come up
	I0210 13:47:08.034408  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:08.034883  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:08.034913  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:08.034848  626233 retry.go:31] will retry after 598.698866ms: waiting for domain to come up
	I0210 13:47:08.634613  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:08.634999  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:08.635030  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:08.634969  626233 retry.go:31] will retry after 595.528974ms: waiting for domain to come up
	I0210 13:47:09.231722  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:09.232198  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:09.232230  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:09.232154  626233 retry.go:31] will retry after 792.687707ms: waiting for domain to come up
	I0210 13:47:10.026226  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:10.026797  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:10.026826  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:10.026768  626233 retry.go:31] will retry after 835.759814ms: waiting for domain to come up
	I0210 13:47:10.864087  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:10.864506  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:10.864552  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:10.864499  626233 retry.go:31] will retry after 1.131519721s: waiting for domain to come up
	I0210 13:47:11.997146  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:11.997604  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:11.997632  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:11.997578  626233 retry.go:31] will retry after 1.848304328s: waiting for domain to come up
	I0210 13:47:13.847132  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:13.847568  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:13.847600  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:13.847535  626233 retry.go:31] will retry after 1.701198894s: waiting for domain to come up
	I0210 13:47:15.550861  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:15.551323  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:15.551370  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:15.551263  626233 retry.go:31] will retry after 1.845371029s: waiting for domain to come up
	I0210 13:47:17.397735  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:17.398237  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:17.398290  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:17.398205  626233 retry.go:31] will retry after 2.650111752s: waiting for domain to come up
	I0210 13:47:20.049729  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:20.050328  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:20.050410  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:20.050326  626233 retry.go:31] will retry after 3.249838356s: waiting for domain to come up
	I0210 13:47:23.303827  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:23.304211  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find current IP address of domain kubernetes-upgrade-935801 in network mk-kubernetes-upgrade-935801
	I0210 13:47:23.304241  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | I0210 13:47:23.304190  626233 retry.go:31] will retry after 4.129418156s: waiting for domain to come up
	I0210 13:47:27.434859  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.435324  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) found domain IP: 192.168.72.152
	I0210 13:47:27.435348  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) reserving static IP address...
	I0210 13:47:27.435381  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has current primary IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.435765  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-935801", mac: "52:54:00:bc:bd:cd", ip: "192.168.72.152"} in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.512868  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | Getting to WaitForSSH function...
	I0210 13:47:27.512907  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) reserved static IP address 192.168.72.152 for domain kubernetes-upgrade-935801
	I0210 13:47:27.512921  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) waiting for SSH...
	I0210 13:47:27.515559  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.515959  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:27.515988  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.516195  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | Using SSH client type: external
	I0210 13:47:27.516223  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa (-rw-------)
	I0210 13:47:27.516265  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:47:27.516297  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | About to run SSH command:
	I0210 13:47:27.516314  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | exit 0
	I0210 13:47:27.644347  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | SSH cmd err, output: <nil>: 
	I0210 13:47:27.644658  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) KVM machine creation complete
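For reference, the WaitForSSH probe logged just above shells out to the system ssh binary with the options shown; reconstructed as a standalone command (key path and IP taken from this run's log), it is roughly:

    # Probe the new VM until a trivial "exit 0" succeeds over SSH; host key checking is
    # disabled because the guest and its key pair were just created.
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa \
        -p 22 docker@192.168.72.152 "exit 0"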
	I0210 13:47:27.645007  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetConfigRaw
	I0210 13:47:27.645603  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:27.645836  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:27.645981  625844 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 13:47:27.646024  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetState
	I0210 13:47:27.647423  625844 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 13:47:27.647437  625844 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 13:47:27.647442  625844 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 13:47:27.647448  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:27.649731  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.650109  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:27.650137  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.650245  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:27.650435  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:27.650588  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:27.650718  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:27.650884  625844 main.go:141] libmachine: Using SSH client type: native
	I0210 13:47:27.651114  625844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:47:27.651126  625844 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 13:47:27.759709  625844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:47:27.759738  625844 main.go:141] libmachine: Detecting the provisioner...
	I0210 13:47:27.759749  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:27.762773  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.763256  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:27.763287  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.763517  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:27.763753  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:27.763922  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:27.764116  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:27.764315  625844 main.go:141] libmachine: Using SSH client type: native
	I0210 13:47:27.764514  625844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:47:27.764529  625844 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 13:47:27.873371  625844 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 13:47:27.873458  625844 main.go:141] libmachine: found compatible host: buildroot
	I0210 13:47:27.873467  625844 main.go:141] libmachine: Provisioning with buildroot...
	I0210 13:47:27.873476  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetMachineName
	I0210 13:47:27.873751  625844 buildroot.go:166] provisioning hostname "kubernetes-upgrade-935801"
	I0210 13:47:27.873787  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetMachineName
	I0210 13:47:27.873998  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:27.876571  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.876895  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:27.876930  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:27.877097  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:27.877283  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:27.877484  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:27.877597  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:27.877760  625844 main.go:141] libmachine: Using SSH client type: native
	I0210 13:47:27.878000  625844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:47:27.878020  625844 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-935801 && echo "kubernetes-upgrade-935801" | sudo tee /etc/hostname
	I0210 13:47:27.998828  625844 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-935801
	
	I0210 13:47:27.998873  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:28.001758  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.002204  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.002244  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.002425  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:28.002605  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.002764  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.002897  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:28.003065  625844 main.go:141] libmachine: Using SSH client type: native
	I0210 13:47:28.003306  625844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:47:28.003323  625844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-935801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-935801/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-935801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:47:28.121987  625844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:47:28.122037  625844 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 13:47:28.122060  625844 buildroot.go:174] setting up certificates
	I0210 13:47:28.122071  625844 provision.go:84] configureAuth start
	I0210 13:47:28.122082  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetMachineName
	I0210 13:47:28.122362  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetIP
	I0210 13:47:28.124905  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.125274  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.125307  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.125423  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:28.127934  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.128256  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.128303  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.128432  625844 provision.go:143] copyHostCerts
	I0210 13:47:28.128517  625844 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 13:47:28.128536  625844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 13:47:28.128625  625844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 13:47:28.128756  625844 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 13:47:28.128768  625844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 13:47:28.128800  625844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 13:47:28.128920  625844 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 13:47:28.128930  625844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 13:47:28.128958  625844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 13:47:28.129043  625844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-935801 san=[127.0.0.1 192.168.72.152 kubernetes-upgrade-935801 localhost minikube]
	I0210 13:47:28.414114  625844 provision.go:177] copyRemoteCerts
	I0210 13:47:28.414223  625844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:47:28.414260  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:28.417336  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.417705  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.417742  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.417930  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:28.418142  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.418367  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:28.418545  625844 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:47:28.503243  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 13:47:28.530898  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0210 13:47:28.555735  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 13:47:28.580650  625844 provision.go:87] duration metric: took 458.563331ms to configureAuth
	I0210 13:47:28.580688  625844 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:47:28.580870  625844 config.go:182] Loaded profile config "kubernetes-upgrade-935801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 13:47:28.580999  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:28.583598  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.584011  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.584056  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.584197  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:28.584424  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.584586  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.584710  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:28.584864  625844 main.go:141] libmachine: Using SSH client type: native
	I0210 13:47:28.585074  625844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:47:28.585098  625844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:47:28.825222  625844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:47:28.825251  625844 main.go:141] libmachine: Checking connection to Docker...
	I0210 13:47:28.825260  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetURL
	I0210 13:47:28.826566  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | using libvirt version 6000000
	I0210 13:47:28.828844  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.829251  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.829276  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.829494  625844 main.go:141] libmachine: Docker is up and running!
	I0210 13:47:28.829518  625844 main.go:141] libmachine: Reticulating splines...
	I0210 13:47:28.829527  625844 client.go:171] duration metric: took 23.592262743s to LocalClient.Create
	I0210 13:47:28.829556  625844 start.go:167] duration metric: took 23.592333508s to libmachine.API.Create "kubernetes-upgrade-935801"
	I0210 13:47:28.829570  625844 start.go:293] postStartSetup for "kubernetes-upgrade-935801" (driver="kvm2")
	I0210 13:47:28.829586  625844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:47:28.829608  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:28.829832  625844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:47:28.829868  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:28.832093  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.832482  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.832527  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.832634  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:28.832818  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.832989  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:28.833137  625844 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:47:28.919832  625844 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:47:28.924406  625844 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:47:28.924436  625844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 13:47:28.924515  625844 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 13:47:28.924589  625844 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 13:47:28.924686  625844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:47:28.934949  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:47:28.959205  625844 start.go:296] duration metric: took 129.615225ms for postStartSetup
	I0210 13:47:28.959263  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetConfigRaw
	I0210 13:47:28.959879  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetIP
	I0210 13:47:28.962216  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.962575  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.962617  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.962849  625844 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/config.json ...
	I0210 13:47:28.963068  625844 start.go:128] duration metric: took 23.749489814s to createHost
	I0210 13:47:28.963101  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:28.965358  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.965708  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:28.965728  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:28.965848  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:28.966031  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.966188  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:28.966335  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:28.966512  625844 main.go:141] libmachine: Using SSH client type: native
	I0210 13:47:28.966677  625844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:47:28.966687  625844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:47:29.077477  625844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739195249.030892449
	
	I0210 13:47:29.077503  625844 fix.go:216] guest clock: 1739195249.030892449
	I0210 13:47:29.077509  625844 fix.go:229] Guest: 2025-02-10 13:47:29.030892449 +0000 UTC Remote: 2025-02-10 13:47:28.96308682 +0000 UTC m=+61.628333856 (delta=67.805629ms)
	I0210 13:47:29.077539  625844 fix.go:200] guest clock delta is within tolerance: 67.805629ms
	I0210 13:47:29.077545  625844 start.go:83] releasing machines lock for "kubernetes-upgrade-935801", held for 23.86418207s
	I0210 13:47:29.077575  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:29.077884  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetIP
	I0210 13:47:29.080652  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:29.081031  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:29.081062  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:29.081226  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:29.081718  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:29.081893  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:47:29.081984  625844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:47:29.082042  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:29.082156  625844 ssh_runner.go:195] Run: cat /version.json
	I0210 13:47:29.082196  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:47:29.084736  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:29.084837  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:29.085074  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:29.085113  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:29.085268  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:29.085351  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:29.085375  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:29.085405  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:47:29.085489  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:29.085568  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:47:29.085649  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:29.085707  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:47:29.085762  625844 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:47:29.085816  625844 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:47:29.169754  625844 ssh_runner.go:195] Run: systemctl --version
	I0210 13:47:29.192835  625844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:47:29.355620  625844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:47:29.361798  625844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:47:29.361872  625844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:47:29.381240  625844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:47:29.381269  625844 start.go:495] detecting cgroup driver to use...
	I0210 13:47:29.381341  625844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:47:29.400620  625844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:47:29.417923  625844 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:47:29.417986  625844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:47:29.433811  625844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:47:29.449468  625844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:47:29.584294  625844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:47:29.759825  625844 docker.go:233] disabling docker service ...
	I0210 13:47:29.759892  625844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:47:29.777671  625844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:47:29.795583  625844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:47:29.940577  625844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:47:30.067124  625844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:47:30.082094  625844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:47:30.103744  625844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 13:47:30.103805  625844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:47:30.114937  625844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:47:30.115021  625844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:47:30.125770  625844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:47:30.136485  625844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
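The sed edits above patch CRI-O's drop-in configuration in place: the pause image is pinned, the cgroup manager is switched to cgroupfs, and conmon is moved into the pod cgroup. Assuming the stock layout of the drop-in on the minikube guest image (the section names are an assumption; the values come from the log), the result looks roughly like:

    # /etc/crio/crio.conf.d/02-crio.conf (sketch of the keys touched above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"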
	I0210 13:47:30.147325  625844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:47:30.158580  625844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:47:30.168104  625844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:47:30.168179  625844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:47:30.186692  625844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
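On a freshly booted guest the bridge netfilter module is not loaded, so the sysctl probe above fails with status 255 and minikube falls back to loading the module explicitly before enabling IPv4 forwarding; condensed into shell, the sequence is roughly:

    # Make bridged pod traffic visible to iptables; load br_netfilter if the sysctl is missing.
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    # Allow the node to route pod traffic.
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"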
	I0210 13:47:30.198747  625844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:47:30.318381  625844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:47:30.423308  625844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:47:30.423394  625844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:47:30.428552  625844 start.go:563] Will wait 60s for crictl version
	I0210 13:47:30.428617  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:30.432778  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:47:30.476938  625844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:47:30.477038  625844 ssh_runner.go:195] Run: crio --version
	I0210 13:47:30.508803  625844 ssh_runner.go:195] Run: crio --version
	I0210 13:47:30.542961  625844 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 13:47:30.544046  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetIP
	I0210 13:47:30.546754  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:30.547045  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:47:21 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:47:30.547070  625844 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:47:30.547251  625844 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0210 13:47:30.551379  625844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
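The one-liner above uses a replace-or-append pattern: any existing host.minikube.internal line is filtered out of /etc/hosts, the current gateway mapping is appended, and the rebuilt file is copied back over /etc/hosts. Expanded for readability (same commands as in the log):

    # Rebuild /etc/hosts with a single, current host.minikube.internal entry.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.72.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts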
	I0210 13:47:30.566807  625844 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:47:30.566919  625844 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:47:30.566963  625844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:47:30.615461  625844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:47:30.615564  625844 ssh_runner.go:195] Run: which lz4
	I0210 13:47:30.620954  625844 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:47:30.627050  625844 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:47:30.627085  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 13:47:32.405501  625844 crio.go:462] duration metric: took 1.784577533s to copy over tarball
	I0210 13:47:32.405587  625844 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:47:35.523065  625844 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.117436256s)
	I0210 13:47:35.523104  625844 crio.go:469] duration metric: took 3.117566089s to extract the tarball
	I0210 13:47:35.523114  625844 ssh_runner.go:146] rm: /preloaded.tar.lz4
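The preload path shown above is: check whether /preloaded.tar.lz4 already exists on the guest, copy the ~473 MB cached cri-o image tarball over SSH when it does not, unpack it into /var, and remove the tarball. The guest-side portion reduces to roughly:

    # Unpack the preloaded container images/layers into /var (extended attributes preserved
    # so file capabilities survive), then drop the tarball to free disk space.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4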
	I0210 13:47:35.579570  625844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:47:35.649301  625844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:47:35.649335  625844 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 13:47:35.649375  625844 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:47:35.649695  625844 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:47:35.649819  625844 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:47:35.649896  625844 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:47:35.649972  625844 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:47:35.650047  625844 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 13:47:35.650118  625844 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:47:35.650177  625844 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 13:47:35.652228  625844 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 13:47:35.652723  625844 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:47:35.652731  625844 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:47:35.652779  625844 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:47:35.652822  625844 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:47:35.652717  625844 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 13:47:35.652910  625844 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:47:35.652930  625844 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:47:35.859500  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 13:47:35.872349  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:47:35.911556  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 13:47:35.930002  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:47:35.937366  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:47:35.943422  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 13:47:35.966178  625844 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 13:47:35.966236  625844 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 13:47:35.966289  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:35.985728  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:47:36.046568  625844 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 13:47:36.046621  625844 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:47:36.046670  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:36.127711  625844 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 13:47:36.127763  625844 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:47:36.127814  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:36.128125  625844 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 13:47:36.128165  625844 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:47:36.128206  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:36.148408  625844 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 13:47:36.148469  625844 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:47:36.148525  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:36.148637  625844 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 13:47:36.148660  625844 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 13:47:36.148687  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:36.148753  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:47:36.181793  625844 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 13:47:36.181837  625844 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:47:36.181877  625844 ssh_runner.go:195] Run: which crictl
	I0210 13:47:36.181896  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:47:36.181936  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:47:36.181961  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:47:36.257688  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:47:36.257785  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:47:36.257836  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:47:36.257885  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:47:36.307481  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:47:36.307622  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:47:36.367181  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:47:36.473741  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:47:36.473805  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:47:36.473890  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:47:36.473933  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:47:36.556543  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:47:36.556605  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:47:36.556655  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:47:36.683685  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:47:36.683761  625844 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 13:47:36.683818  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:47:36.698529  625844 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:47:36.732415  625844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:47:36.756429  625844 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 13:47:36.756465  625844 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 13:47:36.756504  625844 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 13:47:36.812845  625844 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 13:47:36.820320  625844 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 13:47:36.821564  625844 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 13:47:36.946787  625844 cache_images.go:92] duration metric: took 1.297433912s to LoadCachedImages
	W0210 13:47:36.946913  625844 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0210 13:47:36.946937  625844 kubeadm.go:934] updating node { 192.168.72.152 8443 v1.20.0 crio true true} ...
	I0210 13:47:36.947063  625844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-935801 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
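The [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 433-byte scp further down); the empty ExecStart= line is the standard systemd idiom for clearing the distro unit's command before redefining it. Assembled as the file it becomes (a sketch; content copied from the fragment above):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-935801 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.152

    [Install]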
	I0210 13:47:36.947139  625844 ssh_runner.go:195] Run: crio config
	I0210 13:47:37.009330  625844 cni.go:84] Creating CNI manager for ""
	I0210 13:47:37.009366  625844 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:47:37.009379  625844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:47:37.009415  625844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.152 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-935801 NodeName:kubernetes-upgrade-935801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 13:47:37.009615  625844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-935801"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:47:37.009700  625844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 13:47:37.024603  625844 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:47:37.024696  625844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:47:37.035602  625844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0210 13:47:37.055690  625844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:47:37.079660  625844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0210 13:47:37.100547  625844 ssh_runner.go:195] Run: grep 192.168.72.152	control-plane.minikube.internal$ /etc/hosts
	I0210 13:47:37.105063  625844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:47:37.120921  625844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:47:37.263016  625844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:47:37.286600  625844 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801 for IP: 192.168.72.152
	I0210 13:47:37.286634  625844 certs.go:194] generating shared ca certs ...
	I0210 13:47:37.286660  625844 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:47:37.286864  625844 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 13:47:37.286936  625844 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 13:47:37.286961  625844 certs.go:256] generating profile certs ...
	I0210 13:47:37.287046  625844 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/client.key
	I0210 13:47:37.287090  625844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/client.crt with IP's: []
	I0210 13:47:37.429057  625844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/client.crt ...
	I0210 13:47:37.429090  625844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/client.crt: {Name:mk376188eb9f1b4db69b010e44980608231dc71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:47:37.429295  625844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/client.key ...
	I0210 13:47:37.429314  625844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/client.key: {Name:mk183cf1b8a673c3124173974456d005b3ff6da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:47:37.429437  625844 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key.0f2d8851
	I0210 13:47:37.429465  625844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.crt.0f2d8851 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.152]
	I0210 13:47:37.541984  625844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.crt.0f2d8851 ...
	I0210 13:47:37.542075  625844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.crt.0f2d8851: {Name:mka284bbbc41f02982553ba578f46dfa2e755546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:47:37.542277  625844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key.0f2d8851 ...
	I0210 13:47:37.542297  625844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key.0f2d8851: {Name:mk617df851b60b44e64050f2cdcfd879e8f9846b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:47:37.542406  625844 certs.go:381] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.crt.0f2d8851 -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.crt
	I0210 13:47:37.542503  625844 certs.go:385] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key.0f2d8851 -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key
	I0210 13:47:37.542581  625844 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.key
	I0210 13:47:37.542603  625844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.crt with IP's: []
	I0210 13:47:37.727983  625844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.crt ...
	I0210 13:47:37.728018  625844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.crt: {Name:mk89bc09970ac74d41d36daba6ef8fe13931a78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:47:37.728201  625844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.key ...
	I0210 13:47:37.728221  625844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.key: {Name:mk1bb4558572e49daedfa620f4b286f1ce2ecd77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:47:37.728449  625844 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 13:47:37.728502  625844 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 13:47:37.728519  625844 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 13:47:37.728556  625844 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 13:47:37.728589  625844 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:47:37.728622  625844 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 13:47:37.728677  625844 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:47:37.729250  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:47:37.759517  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:47:37.789859  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:47:37.816715  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:47:37.849671  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0210 13:47:37.910513  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 13:47:37.939302  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:47:38.032082  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:47:38.060813  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 13:47:38.089787  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 13:47:38.125207  625844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:47:38.153931  625844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:47:38.172495  625844 ssh_runner.go:195] Run: openssl version
	I0210 13:47:38.178697  625844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 13:47:38.196498  625844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 13:47:38.206779  625844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 13:47:38.206949  625844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 13:47:38.217267  625844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 13:47:38.230763  625844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 13:47:38.260903  625844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 13:47:38.275186  625844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 13:47:38.275265  625844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 13:47:38.290587  625844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:47:38.306609  625844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:47:38.320564  625844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:47:38.327225  625844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:47:38.327308  625844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:47:38.335897  625844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:47:38.351356  625844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:47:38.356396  625844 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 13:47:38.356483  625844 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:47:38.356601  625844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:47:38.356684  625844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:47:38.406322  625844 cri.go:89] found id: ""
	I0210 13:47:38.406420  625844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:47:38.420643  625844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:47:38.434714  625844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:47:38.448636  625844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:47:38.448679  625844 kubeadm.go:157] found existing configuration files:
	
	I0210 13:47:38.448765  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:47:38.463068  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:47:38.463155  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:47:38.474054  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:47:38.486045  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:47:38.486125  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:47:38.496324  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:47:38.505784  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:47:38.505864  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:47:38.520815  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:47:38.535456  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:47:38.535574  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:47:38.548530  625844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:47:38.690643  625844 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:47:38.690729  625844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:47:38.871159  625844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:47:38.871376  625844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:47:38.871527  625844 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:47:39.091517  625844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:47:39.147960  625844 out.go:235]   - Generating certificates and keys ...
	I0210 13:47:39.148074  625844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:47:39.148136  625844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:47:39.237004  625844 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 13:47:39.374108  625844 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 13:47:39.551080  625844 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 13:47:39.629108  625844 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 13:47:39.746313  625844 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 13:47:39.746507  625844 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-935801 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0210 13:47:40.020544  625844 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 13:47:40.020942  625844 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-935801 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0210 13:47:40.419588  625844 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 13:47:40.484468  625844 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 13:47:40.709431  625844 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 13:47:40.709799  625844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:47:40.784950  625844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:47:41.075966  625844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:47:41.432889  625844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:47:41.628744  625844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:47:41.646013  625844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:47:41.650671  625844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:47:41.650748  625844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:47:41.802292  625844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:47:41.804131  625844 out.go:235]   - Booting up control plane ...
	I0210 13:47:41.804390  625844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:47:41.813629  625844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:47:41.817161  625844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:47:41.817281  625844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:47:41.821604  625844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:48:21.785059  625844 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:48:21.785549  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:48:21.785845  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:48:26.785100  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:48:26.785408  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:48:36.783785  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:48:36.784109  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:48:56.783692  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:48:56.783909  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:49:36.782289  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:49:36.782560  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:49:36.782580  625844 kubeadm.go:310] 
	I0210 13:49:36.782633  625844 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:49:36.782694  625844 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:49:36.782705  625844 kubeadm.go:310] 
	I0210 13:49:36.782765  625844 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:49:36.782869  625844 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:49:36.783008  625844 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:49:36.783022  625844 kubeadm.go:310] 
	I0210 13:49:36.783170  625844 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:49:36.783243  625844 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:49:36.783301  625844 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:49:36.783310  625844 kubeadm.go:310] 
	I0210 13:49:36.783465  625844 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:49:36.783601  625844 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:49:36.783612  625844 kubeadm.go:310] 
	I0210 13:49:36.783764  625844 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:49:36.783863  625844 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:49:36.783934  625844 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:49:36.784000  625844 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:49:36.784010  625844 kubeadm.go:310] 
	I0210 13:49:36.784763  625844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:49:36.784902  625844 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:49:36.785016  625844 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0210 13:49:36.785146  625844 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-935801 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-935801 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-935801 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-935801 localhost] and IPs [192.168.72.152 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:49:36.785199  625844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:49:38.396604  625844 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.611360603s)
	I0210 13:49:38.396705  625844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:49:38.414824  625844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:49:38.431288  625844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:49:38.431319  625844 kubeadm.go:157] found existing configuration files:
	
	I0210 13:49:38.431383  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:49:38.446675  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:49:38.446754  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:49:38.461786  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:49:38.476981  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:49:38.477051  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:49:38.492632  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:49:38.506207  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:49:38.506284  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:49:38.520482  625844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:49:38.532629  625844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:49:38.532704  625844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:49:38.544794  625844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:49:38.642177  625844 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:49:38.643991  625844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:49:38.816548  625844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:49:38.816847  625844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:49:38.816971  625844 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:49:39.077503  625844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:49:39.078935  625844 out.go:235]   - Generating certificates and keys ...
	I0210 13:49:39.079029  625844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:49:39.079113  625844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:49:39.079229  625844 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:49:39.079315  625844 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:49:39.079773  625844 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:49:39.079969  625844 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:49:39.080608  625844 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:49:39.080965  625844 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:49:39.081490  625844 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:49:39.082108  625844 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:49:39.082182  625844 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:49:39.082263  625844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:49:39.334055  625844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:49:39.458732  625844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:49:39.574389  625844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:49:39.874499  625844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:49:39.893662  625844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:49:39.895773  625844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:49:39.895828  625844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:49:40.077821  625844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:49:40.079292  625844 out.go:235]   - Booting up control plane ...
	I0210 13:49:40.079424  625844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:49:40.095196  625844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:49:40.098041  625844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:49:40.099219  625844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:49:40.102681  625844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:50:20.104430  625844 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:50:20.104783  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:50:20.105072  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:50:25.105503  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:50:25.105716  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:50:35.106283  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:50:35.106463  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:50:55.107724  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:50:55.107939  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:51:35.108481  625844 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:51:35.108720  625844 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:51:35.108735  625844 kubeadm.go:310] 
	I0210 13:51:35.108775  625844 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:51:35.108843  625844 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:51:35.108870  625844 kubeadm.go:310] 
	I0210 13:51:35.108930  625844 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:51:35.108999  625844 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:51:35.109137  625844 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:51:35.109150  625844 kubeadm.go:310] 
	I0210 13:51:35.109343  625844 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:51:35.109401  625844 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:51:35.109445  625844 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:51:35.109457  625844 kubeadm.go:310] 
	I0210 13:51:35.109572  625844 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:51:35.109708  625844 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:51:35.109728  625844 kubeadm.go:310] 
	I0210 13:51:35.109882  625844 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:51:35.110002  625844 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:51:35.110116  625844 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:51:35.110263  625844 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:51:35.110280  625844 kubeadm.go:310] 
	I0210 13:51:35.110942  625844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:51:35.111062  625844 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:51:35.111177  625844 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:51:35.111280  625844 kubeadm.go:394] duration metric: took 3m56.75480457s to StartCluster
	I0210 13:51:35.111349  625844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:51:35.111420  625844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:51:35.160134  625844 cri.go:89] found id: ""
	I0210 13:51:35.160173  625844 logs.go:282] 0 containers: []
	W0210 13:51:35.160184  625844 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:51:35.160194  625844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:51:35.160306  625844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:51:35.203061  625844 cri.go:89] found id: ""
	I0210 13:51:35.203097  625844 logs.go:282] 0 containers: []
	W0210 13:51:35.203110  625844 logs.go:284] No container was found matching "etcd"
	I0210 13:51:35.203119  625844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:51:35.203186  625844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:51:35.251922  625844 cri.go:89] found id: ""
	I0210 13:51:35.251960  625844 logs.go:282] 0 containers: []
	W0210 13:51:35.251973  625844 logs.go:284] No container was found matching "coredns"
	I0210 13:51:35.251981  625844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:51:35.252053  625844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:51:35.293295  625844 cri.go:89] found id: ""
	I0210 13:51:35.293340  625844 logs.go:282] 0 containers: []
	W0210 13:51:35.293353  625844 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:51:35.293363  625844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:51:35.293439  625844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:51:35.332679  625844 cri.go:89] found id: ""
	I0210 13:51:35.332712  625844 logs.go:282] 0 containers: []
	W0210 13:51:35.332722  625844 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:51:35.332728  625844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:51:35.332787  625844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:51:35.375213  625844 cri.go:89] found id: ""
	I0210 13:51:35.375253  625844 logs.go:282] 0 containers: []
	W0210 13:51:35.375265  625844 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:51:35.375274  625844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:51:35.375347  625844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:51:35.420721  625844 cri.go:89] found id: ""
	I0210 13:51:35.420748  625844 logs.go:282] 0 containers: []
	W0210 13:51:35.420759  625844 logs.go:284] No container was found matching "kindnet"
	I0210 13:51:35.420775  625844 logs.go:123] Gathering logs for kubelet ...
	I0210 13:51:35.420797  625844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:51:35.477584  625844 logs.go:123] Gathering logs for dmesg ...
	I0210 13:51:35.477642  625844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:51:35.493721  625844 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:51:35.493759  625844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:51:35.658122  625844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:51:35.658151  625844 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:51:35.658169  625844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:51:35.797994  625844 logs.go:123] Gathering logs for container status ...
	I0210 13:51:35.798053  625844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0210 13:51:35.881015  625844 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:51:35.881088  625844 out.go:270] * 
	* 
	W0210 13:51:35.881172  625844 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:51:35.881201  625844 out.go:270] * 
	* 
	W0210 13:51:35.882090  625844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:51:35.885585  625844 out.go:201] 
	W0210 13:51:35.886558  625844 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:51:35.886611  625844 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:51:35.886640  625844 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:51:35.888271  625844 out.go:201] 

                                                
                                                
** /stderr **
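For context on the failure quoted above: kubeadm's wait-control-plane phase repeatedly probes the kubelet's healthz endpoint, which is exactly the "curl -sSL http://localhost:10248/healthz" call the log shows failing with "connection refused". A minimal Go sketch of an equivalent probe (the endpoint and port are taken from the log; this is an illustration, not kubeadm's or minikube's actual code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeKubelet performs the same kind of check kubeadm reports above: an
// HTTP GET against the kubelet healthz endpoint on localhost:10248.
// "connection refused" here means the kubelet process never came up.
func probeKubelet() error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		return fmt.Errorf("kubelet healthz unreachable: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("kubelet healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeKubelet(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}

Every probe in the log returned "connection refused", which is consistent with the kubelet never starting; the advice later in the same log is to inspect 'journalctl -xeu kubelet' and to try --extra-config=kubelet.cgroup-driver=systemd.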
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-935801
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-935801: (1.690698704s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-935801 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-935801 status --format={{.Host}}: exit status 7 (95.697658ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
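The harness accepts this nonzero exit because, per the "may be ok" note above, a stopped host is reported through the status command's exit code rather than through an error message. A hedged sketch of how a Go caller can distinguish that case with only the standard library (the "minikube" binary name stands in for the out/minikube-linux-amd64 path used by the test; this is not the test's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same status invocation as the test, against the same profile.
	cmd := exec.Command("minikube", "-p", "kubernetes-upgrade-935801",
		"status", "--format={{.Host}}")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Per the "may be ok" note in the test output, a nonzero exit
		// code can simply mean the host is stopped, not that the
		// command itself failed.
		fmt.Printf("status %q, exit code %d (may be ok)\n",
			string(out), exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("status %q\n", string(out))
}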
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.409459366s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-935801 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (137.291449ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-935801] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-935801
	    minikube start -p kubernetes-upgrade-935801 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9358012 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-935801 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
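The K8S_DOWNGRADE_UNSUPPORTED refusal above is the behaviour this step of the test expects: minikube declines to move the existing v1.32.1 cluster back to v1.20.0 and instead lists the three recovery options quoted in the stderr block. A rough sketch of such a version guard, written here with golang.org/x/mod/semver as an assumed comparison helper; it illustrates the check, it is not minikube's actual validation code:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade refuses a request to move an existing cluster to an
// older Kubernetes version, mirroring the refusal quoted above. Both
// arguments are expected in "vMAJOR.MINOR.PATCH" form.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf(
			"unable to safely downgrade existing Kubernetes %s cluster to %s",
			existing, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.32.1", "v1.20.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}

After the refusal, the test follows the third suggested option and simply restarts the existing cluster at v1.32.1, which is the step logged next.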
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-935801 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (26.820653456s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-10 13:52:49.234095533 +0000 UTC m=+4119.609985203
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-935801 -n kubernetes-upgrade-935801
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-935801 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-935801 logs -n 25: (1.942698696s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo cat                           | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo cat                           | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo cat                           | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo docker                        | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo cat                           | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo cat                           | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo cat                           | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo cat                           | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo                               | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo find                          | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-020784 sudo crio                          | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-020784                                    | kindnet-020784            | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC | 10 Feb 25 13:51 UTC |
	| start   | -p custom-flannel-020784                             | custom-flannel-020784     | jenkins | v1.35.0 | 10 Feb 25 13:51 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935801                         | kubernetes-upgrade-935801 | jenkins | v1.35.0 | 10 Feb 25 13:52 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935801                         | kubernetes-upgrade-935801 | jenkins | v1.35.0 | 10 Feb 25 13:52 UTC | 10 Feb 25 13:52 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 13:52:22
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 13:52:22.470584  632910 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:52:22.470694  632910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:52:22.470698  632910 out.go:358] Setting ErrFile to fd 2...
	I0210 13:52:22.470702  632910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:52:22.470917  632910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:52:22.471529  632910 out.go:352] Setting JSON to false
	I0210 13:52:22.473067  632910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12887,"bootTime":1739182655,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:52:22.473179  632910 start.go:139] virtualization: kvm guest
	I0210 13:52:22.475051  632910 out.go:177] * [kubernetes-upgrade-935801] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:52:22.476256  632910 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 13:52:22.476299  632910 notify.go:220] Checking for updates...
	I0210 13:52:22.478417  632910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:52:22.479615  632910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:52:22.480758  632910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:52:22.482103  632910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:52:22.483257  632910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:52:22.485335  632910 config.go:182] Loaded profile config "kubernetes-upgrade-935801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:52:22.486585  632910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:52:22.486685  632910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:52:22.510105  632910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0210 13:52:22.510887  632910 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:52:22.512407  632910 main.go:141] libmachine: Using API Version  1
	I0210 13:52:22.512469  632910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:52:22.512873  632910 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:52:22.514534  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:22.514897  632910 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:52:22.515396  632910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:52:22.515472  632910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:52:22.536682  632910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0210 13:52:22.537229  632910 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:52:22.537783  632910 main.go:141] libmachine: Using API Version  1
	I0210 13:52:22.537806  632910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:52:22.538352  632910 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:52:22.538550  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:22.576828  632910 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:52:22.577964  632910 start.go:297] selected driver: kvm2
	I0210 13:52:22.577982  632910 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-up
grade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:52:22.578112  632910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:52:22.578791  632910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:52:22.578898  632910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:52:22.595707  632910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:52:22.596367  632910 cni.go:84] Creating CNI manager for ""
	I0210 13:52:22.596431  632910 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:52:22.596471  632910 start.go:340] cluster config:
	{Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:52:22.596615  632910 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:52:22.598391  632910 out.go:177] * Starting "kubernetes-upgrade-935801" primary control-plane node in "kubernetes-upgrade-935801" cluster
	I0210 13:52:20.025465  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:20.025927  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | unable to find current IP address of domain custom-flannel-020784 in network mk-custom-flannel-020784
	I0210 13:52:20.025958  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | I0210 13:52:20.025909  632612 retry.go:31] will retry after 3.581782795s: waiting for domain to come up
	I0210 13:52:23.610131  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.610774  632557 main.go:141] libmachine: (custom-flannel-020784) found domain IP: 192.168.61.77
	I0210 13:52:23.610805  632557 main.go:141] libmachine: (custom-flannel-020784) reserving static IP address...
	I0210 13:52:23.610820  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has current primary IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.611213  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | unable to find host DHCP lease matching {name: "custom-flannel-020784", mac: "52:54:00:02:11:ec", ip: "192.168.61.77"} in network mk-custom-flannel-020784
	I0210 13:52:23.699213  632557 main.go:141] libmachine: (custom-flannel-020784) reserved static IP address 192.168.61.77 for domain custom-flannel-020784
	I0210 13:52:23.699247  632557 main.go:141] libmachine: (custom-flannel-020784) waiting for SSH...
	I0210 13:52:23.699259  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | Getting to WaitForSSH function...
	I0210 13:52:23.702460  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.702951  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:23.703004  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.703104  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | Using SSH client type: external
	I0210 13:52:23.703132  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/custom-flannel-020784/id_rsa (-rw-------)
	I0210 13:52:23.703183  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/custom-flannel-020784/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:52:23.703208  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | About to run SSH command:
	I0210 13:52:23.703226  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | exit 0
	I0210 13:52:23.840471  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | SSH cmd err, output: <nil>: 
	I0210 13:52:23.840744  632557 main.go:141] libmachine: (custom-flannel-020784) KVM machine creation complete
	I0210 13:52:23.841142  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetConfigRaw
	I0210 13:52:23.841734  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .DriverName
	I0210 13:52:23.841962  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .DriverName
	I0210 13:52:23.842146  632557 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 13:52:23.842163  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetState
	I0210 13:52:23.843645  632557 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 13:52:23.843662  632557 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 13:52:23.843669  632557 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 13:52:23.843677  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:23.846316  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.846790  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:23.846818  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.847011  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:23.847186  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:23.847368  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:23.847486  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:23.847628  632557 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:23.847837  632557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0210 13:52:23.847852  632557 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 13:52:23.964478  632557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:52:23.964510  632557 main.go:141] libmachine: Detecting the provisioner...
	I0210 13:52:23.964521  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:23.967849  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.968287  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:23.968334  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:23.968510  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:23.968732  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:23.968923  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:23.969085  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:23.969283  632557 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:23.969512  632557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0210 13:52:23.969528  632557 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 13:52:24.089254  632557 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 13:52:24.089364  632557 main.go:141] libmachine: found compatible host: buildroot
	I0210 13:52:24.089378  632557 main.go:141] libmachine: Provisioning with buildroot...
	I0210 13:52:24.089393  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetMachineName
	I0210 13:52:24.089612  632557 buildroot.go:166] provisioning hostname "custom-flannel-020784"
	I0210 13:52:24.089636  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetMachineName
	I0210 13:52:24.089761  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:24.092712  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.093152  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.093203  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.093389  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:24.093583  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.093752  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.093867  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:24.094025  632557 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:24.094247  632557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0210 13:52:24.094263  632557 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-020784 && echo "custom-flannel-020784" | sudo tee /etc/hostname
	I0210 13:52:24.231053  632557 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-020784
	
	I0210 13:52:24.231089  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:24.234576  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.235000  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.235043  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.235202  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:24.235430  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.235613  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.235785  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:24.235964  632557 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:24.236195  632557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0210 13:52:24.236218  632557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-020784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-020784/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-020784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:52:24.358156  632557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:52:24.358193  632557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 13:52:24.358236  632557 buildroot.go:174] setting up certificates
	I0210 13:52:24.358252  632557 provision.go:84] configureAuth start
	I0210 13:52:24.358267  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetMachineName
	I0210 13:52:24.358621  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetIP
	I0210 13:52:24.361216  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.361590  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.361615  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.361775  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:24.364252  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.364583  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.364606  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.364789  632557 provision.go:143] copyHostCerts
	I0210 13:52:24.364866  632557 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 13:52:24.364885  632557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 13:52:24.364963  632557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 13:52:24.365096  632557 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 13:52:24.365109  632557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 13:52:24.365140  632557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 13:52:24.365253  632557 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 13:52:24.365264  632557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 13:52:24.365294  632557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 13:52:24.365378  632557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-020784 san=[127.0.0.1 192.168.61.77 custom-flannel-020784 localhost minikube]
	I0210 13:52:24.465394  632557 provision.go:177] copyRemoteCerts
	I0210 13:52:24.465460  632557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:52:24.465488  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:24.468603  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.469007  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.469065  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.469317  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:24.469546  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.469761  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:24.469938  632557 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/custom-flannel-020784/id_rsa Username:docker}
	I0210 13:52:24.554430  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 13:52:24.579694  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0210 13:52:24.604023  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:52:24.629771  632557 provision.go:87] duration metric: took 271.504491ms to configureAuth
	I0210 13:52:24.629801  632557 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:52:24.630016  632557 config.go:182] Loaded profile config "custom-flannel-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:52:24.630110  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:24.632901  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.633259  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.633295  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.633518  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:24.633778  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.633986  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.634164  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:24.634349  632557 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:24.634604  632557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0210 13:52:24.634631  632557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:52:20.284494  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:22.747400  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:22.599726  632910 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:52:22.599779  632910 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 13:52:22.599794  632910 cache.go:56] Caching tarball of preloaded images
	I0210 13:52:22.599919  632910 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:52:22.599957  632910 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 13:52:22.600111  632910 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/config.json ...
	I0210 13:52:22.600440  632910 start.go:360] acquireMachinesLock for kubernetes-upgrade-935801: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:52:25.169382  632910 start.go:364] duration metric: took 2.568898013s to acquireMachinesLock for "kubernetes-upgrade-935801"
	I0210 13:52:25.169441  632910 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:52:25.169469  632910 fix.go:54] fixHost starting: 
	I0210 13:52:25.169913  632910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:52:25.169973  632910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:52:25.190691  632910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37507
	I0210 13:52:25.191321  632910 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:52:25.192081  632910 main.go:141] libmachine: Using API Version  1
	I0210 13:52:25.192106  632910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:52:25.192549  632910 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:52:25.192745  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:25.192918  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetState
	I0210 13:52:25.194402  632910 fix.go:112] recreateIfNeeded on kubernetes-upgrade-935801: state=Running err=<nil>
	W0210 13:52:25.194425  632910 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:52:25.196225  632910 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-935801" VM ...
	I0210 13:52:24.899386  632557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:52:24.899415  632557 main.go:141] libmachine: Checking connection to Docker...
	I0210 13:52:24.899426  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetURL
	I0210 13:52:24.900801  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | using libvirt version 6000000
	I0210 13:52:24.903463  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.903782  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.903812  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.904032  632557 main.go:141] libmachine: Docker is up and running!
	I0210 13:52:24.904048  632557 main.go:141] libmachine: Reticulating splines...
	I0210 13:52:24.904057  632557 client.go:171] duration metric: took 25.685280353s to LocalClient.Create
	I0210 13:52:24.904101  632557 start.go:167] duration metric: took 25.685373785s to libmachine.API.Create "custom-flannel-020784"
	I0210 13:52:24.904115  632557 start.go:293] postStartSetup for "custom-flannel-020784" (driver="kvm2")
	I0210 13:52:24.904128  632557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:52:24.904166  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .DriverName
	I0210 13:52:24.904459  632557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:52:24.904489  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:24.906835  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.907161  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:24.907185  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:24.907323  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:24.907518  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:24.907703  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:24.907825  632557 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/custom-flannel-020784/id_rsa Username:docker}
	I0210 13:52:24.995166  632557 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:52:24.999722  632557 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:52:24.999755  632557 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 13:52:24.999832  632557 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 13:52:24.999946  632557 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 13:52:25.000070  632557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:52:25.013413  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:52:25.046814  632557 start.go:296] duration metric: took 142.679051ms for postStartSetup
	I0210 13:52:25.046882  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetConfigRaw
	I0210 13:52:25.047679  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetIP
	I0210 13:52:25.050516  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.050939  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:25.050972  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.051302  632557 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/config.json ...
	I0210 13:52:25.051567  632557 start.go:128] duration metric: took 25.85755662s to createHost
	I0210 13:52:25.051639  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:25.054198  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.054576  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:25.054626  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.054755  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:25.054932  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:25.055077  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:25.055222  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:25.055383  632557 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:25.055594  632557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.77 22 <nil> <nil>}
	I0210 13:52:25.055612  632557 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:52:25.169173  632557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739195545.151193167
	
	I0210 13:52:25.169204  632557 fix.go:216] guest clock: 1739195545.151193167
	I0210 13:52:25.169212  632557 fix.go:229] Guest: 2025-02-10 13:52:25.151193167 +0000 UTC Remote: 2025-02-10 13:52:25.051619625 +0000 UTC m=+30.248831829 (delta=99.573542ms)
	I0210 13:52:25.169266  632557 fix.go:200] guest clock delta is within tolerance: 99.573542ms
	I0210 13:52:25.169274  632557 start.go:83] releasing machines lock for "custom-flannel-020784", held for 25.975496807s
	I0210 13:52:25.169324  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .DriverName
	I0210 13:52:25.169626  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetIP
	I0210 13:52:25.173398  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.173845  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:25.173874  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.174136  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .DriverName
	I0210 13:52:25.174774  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .DriverName
	I0210 13:52:25.174969  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .DriverName
	I0210 13:52:25.175079  632557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:52:25.175127  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:25.175201  632557 ssh_runner.go:195] Run: cat /version.json
	I0210 13:52:25.175228  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHHostname
	I0210 13:52:25.178222  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.178308  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.178644  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:25.178682  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:25.178706  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.178957  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:25.178968  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:25.179100  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHPort
	I0210 13:52:25.179194  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:25.179269  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHKeyPath
	I0210 13:52:25.179350  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:25.179430  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetSSHUsername
	I0210 13:52:25.179521  632557 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/custom-flannel-020784/id_rsa Username:docker}
	I0210 13:52:25.179516  632557 sshutil.go:53] new ssh client: &{IP:192.168.61.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/custom-flannel-020784/id_rsa Username:docker}
	I0210 13:52:25.265439  632557 ssh_runner.go:195] Run: systemctl --version
	I0210 13:52:25.285613  632557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:52:25.457775  632557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:52:25.465157  632557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:52:25.465238  632557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:52:25.489217  632557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:52:25.489240  632557 start.go:495] detecting cgroup driver to use...
	I0210 13:52:25.489306  632557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:52:25.513534  632557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:52:25.534156  632557 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:52:25.534237  632557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:52:25.550060  632557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:52:25.566323  632557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:52:25.772903  632557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:52:25.974615  632557 docker.go:233] disabling docker service ...
	I0210 13:52:25.974702  632557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:52:25.993155  632557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:52:26.013176  632557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:52:26.157472  632557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:52:26.300694  632557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:52:26.315426  632557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:52:26.339644  632557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 13:52:26.339723  632557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:26.354808  632557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:52:26.354904  632557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:26.368647  632557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:26.379761  632557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:26.390403  632557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:52:26.401172  632557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:26.411766  632557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:26.430043  632557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:26.441565  632557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:52:26.451067  632557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:52:26.451163  632557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:52:26.465303  632557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:52:26.475208  632557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:52:26.585144  632557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:52:26.686293  632557 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:52:26.686385  632557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:52:26.693443  632557 start.go:563] Will wait 60s for crictl version
	I0210 13:52:26.693515  632557 ssh_runner.go:195] Run: which crictl
	I0210 13:52:26.699015  632557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:52:26.740330  632557 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:52:26.740437  632557 ssh_runner.go:195] Run: crio --version
	I0210 13:52:26.770066  632557 ssh_runner.go:195] Run: crio --version
	I0210 13:52:26.799741  632557 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 13:52:22.155082  628186 logs.go:123] Gathering logs for kube-scheduler [6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a] ...
	I0210 13:52:22.155119  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:22.223879  628186 logs.go:123] Gathering logs for kube-controller-manager [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2] ...
	I0210 13:52:22.223919  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:24.769824  628186 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0210 13:52:24.770630  628186 api_server.go:269] stopped: https://192.168.39.134:8443/healthz: Get "https://192.168.39.134:8443/healthz": dial tcp 192.168.39.134:8443: connect: connection refused
	I0210 13:52:24.770694  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:52:24.770763  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:52:24.826741  628186 cri.go:89] found id: "416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:24.826770  628186 cri.go:89] found id: ""
	I0210 13:52:24.826781  628186 logs.go:282] 1 containers: [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15]
	I0210 13:52:24.826867  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:24.831329  628186 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:52:24.831388  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:52:24.873707  628186 cri.go:89] found id: "af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:24.873743  628186 cri.go:89] found id: ""
	I0210 13:52:24.873756  628186 logs.go:282] 1 containers: [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a]
	I0210 13:52:24.873827  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:24.877971  628186 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:52:24.878051  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:52:24.918950  628186 cri.go:89] found id: "9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:24.918976  628186 cri.go:89] found id: ""
	I0210 13:52:24.918986  628186 logs.go:282] 1 containers: [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb]
	I0210 13:52:24.919045  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:24.923979  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:52:24.924074  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:52:24.963925  628186 cri.go:89] found id: "7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:24.963958  628186 cri.go:89] found id: "6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:24.963964  628186 cri.go:89] found id: ""
	I0210 13:52:24.963974  628186 logs.go:282] 2 containers: [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a]
	I0210 13:52:24.964038  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:24.968473  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:24.972505  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:52:24.972566  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:52:25.013413  628186 cri.go:89] found id: "aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:25.013442  628186 cri.go:89] found id: ""
	I0210 13:52:25.013452  628186 logs.go:282] 1 containers: [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6]
	I0210 13:52:25.013509  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:25.018390  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:52:25.018457  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:52:25.065686  628186 cri.go:89] found id: "3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:25.065714  628186 cri.go:89] found id: ""
	I0210 13:52:25.065724  628186 logs.go:282] 1 containers: [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2]
	I0210 13:52:25.065788  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:25.070003  628186 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:52:25.070061  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:52:25.105697  628186 cri.go:89] found id: ""
	I0210 13:52:25.105730  628186 logs.go:282] 0 containers: []
	W0210 13:52:25.105743  628186 logs.go:284] No container was found matching "kindnet"
	I0210 13:52:25.105755  628186 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:52:25.105771  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:52:25.412470  628186 logs.go:123] Gathering logs for kubelet ...
	I0210 13:52:25.412511  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:52:25.565510  628186 logs.go:123] Gathering logs for etcd [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a] ...
	I0210 13:52:25.565557  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:25.619258  628186 logs.go:123] Gathering logs for kube-scheduler [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f] ...
	I0210 13:52:25.619298  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:25.705957  628186 logs.go:123] Gathering logs for kube-scheduler [6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a] ...
	I0210 13:52:25.706015  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:25.759124  628186 logs.go:123] Gathering logs for kube-proxy [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6] ...
	I0210 13:52:25.759167  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:25.810969  628186 logs.go:123] Gathering logs for kube-controller-manager [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2] ...
	I0210 13:52:25.811018  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:25.861895  628186 logs.go:123] Gathering logs for container status ...
	I0210 13:52:25.861938  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:52:25.914185  628186 logs.go:123] Gathering logs for dmesg ...
	I0210 13:52:25.914227  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:52:25.934932  628186 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:52:25.934965  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:52:26.020713  628186 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:52:26.020746  628186 logs.go:123] Gathering logs for kube-apiserver [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15] ...
	I0210 13:52:26.020763  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:26.075740  628186 logs.go:123] Gathering logs for coredns [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb] ...
	I0210 13:52:26.075784  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:25.197451  632910 machine.go:93] provisionDockerMachine start ...
	I0210 13:52:25.197470  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:25.197670  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:25.200584  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.201083  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:25.201127  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.201299  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:25.201484  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:25.201648  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:25.201785  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:25.201986  632910 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:25.202262  632910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:52:25.202284  632910 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:52:25.321583  632910 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-935801
	
	I0210 13:52:25.321622  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetMachineName
	I0210 13:52:25.321879  632910 buildroot.go:166] provisioning hostname "kubernetes-upgrade-935801"
	I0210 13:52:25.321915  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetMachineName
	I0210 13:52:25.322129  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:25.325591  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.325960  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:25.325994  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.326165  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:25.326368  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:25.326547  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:25.326721  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:25.326976  632910 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:25.327232  632910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:52:25.327250  632910 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-935801 && echo "kubernetes-upgrade-935801" | sudo tee /etc/hostname
	I0210 13:52:25.478455  632910 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-935801
	
	I0210 13:52:25.478503  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:25.481885  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.482344  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:25.482372  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.482607  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:25.482807  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:25.482979  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:25.483094  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:25.483309  632910 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:25.483505  632910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:52:25.483523  632910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-935801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-935801/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-935801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:52:25.598539  632910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:52:25.598649  632910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 13:52:25.598740  632910 buildroot.go:174] setting up certificates
	I0210 13:52:25.598791  632910 provision.go:84] configureAuth start
	I0210 13:52:25.598821  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetMachineName
	I0210 13:52:25.599241  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetIP
	I0210 13:52:25.602603  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.603048  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:25.603074  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.603399  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:25.607029  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.607634  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:25.607758  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.608243  632910 provision.go:143] copyHostCerts
	I0210 13:52:25.608347  632910 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 13:52:25.608373  632910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 13:52:25.608445  632910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 13:52:25.608648  632910 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 13:52:25.608658  632910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 13:52:25.608689  632910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 13:52:25.608779  632910 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 13:52:25.608784  632910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 13:52:25.608810  632910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 13:52:25.608884  632910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-935801 san=[127.0.0.1 192.168.72.152 kubernetes-upgrade-935801 localhost minikube]
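The server certificate generated here is signed by the local minikube CA and carries SANs for the loopback address, the VM IP, the machine name, localhost and minikube, so the machine endpoint can be verified under any of those names. A hedged sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 (illustrative only, not minikube's crypto helpers; the throwaway CA stands in for ca.pem/ca-key.pem):

// certsketch.go - illustrative sketch of issuing a server certificate
// with IP and DNS SANs (not minikube's actual code).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; the real flow reuses ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate whose SANs mirror the san=[...] list in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "kubernetes-upgrade-935801"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.152")},
		DNSNames:     []string{"kubernetes-upgrade-935801", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}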
	I0210 13:52:25.950826  632910 provision.go:177] copyRemoteCerts
	I0210 13:52:25.950930  632910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:52:25.950977  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:25.953825  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.954234  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:25.954275  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:25.954423  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:25.954627  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:25.954823  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:25.955043  632910 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:52:26.056628  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 13:52:26.098833  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0210 13:52:26.133981  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 13:52:26.163864  632910 provision.go:87] duration metric: took 565.042932ms to configureAuth
	I0210 13:52:26.163900  632910 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:52:26.164079  632910 config.go:182] Loaded profile config "kubernetes-upgrade-935801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:52:26.164192  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:26.167476  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:26.167958  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:26.168006  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:26.168202  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:26.168453  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:26.168636  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:26.168759  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:26.168930  632910 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:26.169194  632910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:52:26.169223  632910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:52:26.800955  632557 main.go:141] libmachine: (custom-flannel-020784) Calling .GetIP
	I0210 13:52:26.803610  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:26.803993  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:11:ec", ip: ""} in network mk-custom-flannel-020784: {Iface:virbr1 ExpiryTime:2025-02-10 14:52:16 +0000 UTC Type:0 Mac:52:54:00:02:11:ec Iaid: IPaddr:192.168.61.77 Prefix:24 Hostname:custom-flannel-020784 Clientid:01:52:54:00:02:11:ec}
	I0210 13:52:26.804029  632557 main.go:141] libmachine: (custom-flannel-020784) DBG | domain custom-flannel-020784 has defined IP address 192.168.61.77 and MAC address 52:54:00:02:11:ec in network mk-custom-flannel-020784
	I0210 13:52:26.804192  632557 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0210 13:52:26.808432  632557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:52:26.821546  632557 kubeadm.go:883] updating cluster {Name:custom-flannel-020784 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:custom-flannel-020784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:52:26.821671  632557 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:52:26.821741  632557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:52:26.854998  632557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 13:52:26.855085  632557 ssh_runner.go:195] Run: which lz4
	I0210 13:52:26.859526  632557 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:52:26.863904  632557 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:52:26.863947  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 13:52:28.353909  632557 crio.go:462] duration metric: took 1.494413957s to copy over tarball
	I0210 13:52:28.353987  632557 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
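Since no preloaded images were found in the runtime, the ~400 MB preload tarball is copied into the guest and unpacked under /var with lz4, preserving extended attributes, which is what populates CRI-O's image store without pulling from a registry. A small sketch of that check-then-extract step, reusing the exact tar flags from the log (the wrapper function is hypothetical):

// preload.go - sketch of the "check for preload, else extract" step seen
// above (illustrative; paths and helper names are hypothetical).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	// Mirror the existence check in the log: stat the tarball first.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	// Same tar invocation as the log: lz4-compressed, xattrs preserved,
	// unpacked under /var so the container image store is populated.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}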
	I0210 13:52:25.245774  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:27.248509  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:29.747979  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:28.626303  628186 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0210 13:52:28.627119  628186 api_server.go:269] stopped: https://192.168.39.134:8443/healthz: Get "https://192.168.39.134:8443/healthz": dial tcp 192.168.39.134:8443: connect: connection refused
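Process 628186 is meanwhile retrying the apiserver's /healthz endpoint and getting connection refused, so it falls back to collecting component logs below. A minimal sketch of such a health probe, assuming the usual self-signed apiserver certificate and therefore skipping TLS verification (illustrative; the real check lives in minikube's api_server.go):

// healthz.go - sketch of probing an apiserver /healthz endpoint
// (illustrative only).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bring-up, so a
		// probe like this typically skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" as in the log
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.39.134:8443/healthz"))
}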
	I0210 13:52:28.627189  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:52:28.627253  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:52:28.669210  628186 cri.go:89] found id: "416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:28.669244  628186 cri.go:89] found id: ""
	I0210 13:52:28.669255  628186 logs.go:282] 1 containers: [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15]
	I0210 13:52:28.669317  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:28.674160  628186 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:52:28.674234  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:52:28.731148  628186 cri.go:89] found id: "af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:28.731176  628186 cri.go:89] found id: ""
	I0210 13:52:28.731186  628186 logs.go:282] 1 containers: [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a]
	I0210 13:52:28.731245  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:28.737568  628186 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:52:28.737665  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:52:28.779818  628186 cri.go:89] found id: "9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:28.779848  628186 cri.go:89] found id: ""
	I0210 13:52:28.779859  628186 logs.go:282] 1 containers: [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb]
	I0210 13:52:28.779931  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:28.784549  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:52:28.784631  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:52:28.824658  628186 cri.go:89] found id: "7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:28.824692  628186 cri.go:89] found id: "6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:28.824698  628186 cri.go:89] found id: ""
	I0210 13:52:28.824709  628186 logs.go:282] 2 containers: [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a]
	I0210 13:52:28.824777  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:28.829175  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:28.834108  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:52:28.834205  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:52:28.873346  628186 cri.go:89] found id: "aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:28.873380  628186 cri.go:89] found id: ""
	I0210 13:52:28.873391  628186 logs.go:282] 1 containers: [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6]
	I0210 13:52:28.873461  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:28.878035  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:52:28.878128  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:52:28.918802  628186 cri.go:89] found id: "3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:28.918831  628186 cri.go:89] found id: ""
	I0210 13:52:28.918842  628186 logs.go:282] 1 containers: [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2]
	I0210 13:52:28.918910  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:28.923940  628186 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:52:28.924020  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:52:28.961556  628186 cri.go:89] found id: ""
	I0210 13:52:28.961594  628186 logs.go:282] 0 containers: []
	W0210 13:52:28.961606  628186 logs.go:284] No container was found matching "kindnet"
	I0210 13:52:28.961620  628186 logs.go:123] Gathering logs for kubelet ...
	I0210 13:52:28.961662  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:52:29.077388  628186 logs.go:123] Gathering logs for coredns [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb] ...
	I0210 13:52:29.077431  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:29.115022  628186 logs.go:123] Gathering logs for kube-proxy [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6] ...
	I0210 13:52:29.115069  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:29.156873  628186 logs.go:123] Gathering logs for kube-controller-manager [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2] ...
	I0210 13:52:29.156915  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:29.206068  628186 logs.go:123] Gathering logs for container status ...
	I0210 13:52:29.206100  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:52:29.268410  628186 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:52:29.268454  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:52:29.576574  628186 logs.go:123] Gathering logs for dmesg ...
	I0210 13:52:29.576624  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:52:29.597158  628186 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:52:29.597200  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:52:29.691768  628186 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:52:29.691804  628186 logs.go:123] Gathering logs for kube-apiserver [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15] ...
	I0210 13:52:29.691822  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:29.748560  628186 logs.go:123] Gathering logs for etcd [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a] ...
	I0210 13:52:29.748599  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:29.800631  628186 logs.go:123] Gathering logs for kube-scheduler [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f] ...
	I0210 13:52:29.800668  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:29.901164  628186 logs.go:123] Gathering logs for kube-scheduler [6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a] ...
	I0210 13:52:29.901208  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
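Each of the log-gathering passes above follows the same two commands: `crictl ps -a --quiet --name=<component>` to find container IDs, then `crictl logs --tail 400 <id>` to tail them. A hedged sketch of that loop (the wrapper functions are hypothetical; the crictl invocations are the ones shown in the log):

// crilogs.go - sketch of the "list container IDs, then tail logs" pattern
// shown above (illustrative; not minikube's logs.go).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: sudo crictl ps -a --quiet --name=<component>
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors: sudo crictl logs --tail 400 <id>
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component, "lookup failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
		}
	}
}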
	I0210 13:52:32.245168  632910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:52:32.245204  632910 machine.go:96] duration metric: took 7.0477392s to provisionDockerMachine
	I0210 13:52:32.245232  632910 start.go:293] postStartSetup for "kubernetes-upgrade-935801" (driver="kvm2")
	I0210 13:52:32.245246  632910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:52:32.245272  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:32.245641  632910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:52:32.245679  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:32.248951  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.249386  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:32.249421  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.249597  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:32.249789  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:32.249935  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:32.250089  632910 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:52:32.339780  632910 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:52:32.345482  632910 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:52:32.345519  632910 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 13:52:32.345593  632910 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 13:52:32.345677  632910 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 13:52:32.345762  632910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:52:32.358592  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:52:32.388209  632910 start.go:296] duration metric: took 142.957401ms for postStartSetup
	I0210 13:52:32.388265  632910 fix.go:56] duration metric: took 7.218796973s for fixHost
	I0210 13:52:32.388312  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:32.391674  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.392065  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:32.392099  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.392242  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:32.392523  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:32.392693  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:32.392869  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:32.393046  632910 main.go:141] libmachine: Using SSH client type: native
	I0210 13:52:32.393305  632910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0210 13:52:32.393323  632910 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:52:30.739257  632557 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.385238864s)
	I0210 13:52:30.739292  632557 crio.go:469] duration metric: took 2.385348203s to extract the tarball
	I0210 13:52:30.739304  632557 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 13:52:30.777467  632557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:52:30.826591  632557 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:52:30.826621  632557 cache_images.go:84] Images are preloaded, skipping loading
	I0210 13:52:30.826632  632557 kubeadm.go:934] updating node { 192.168.61.77 8443 v1.32.1 crio true true} ...
	I0210 13:52:30.826772  632557 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-020784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:custom-flannel-020784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0210 13:52:30.826858  632557 ssh_runner.go:195] Run: crio config
	I0210 13:52:30.882229  632557 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0210 13:52:30.882267  632557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:52:30.882288  632557 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.77 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-020784 NodeName:custom-flannel-020784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:52:30.882409  632557 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-020784"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.77"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.77"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
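The generated kubeadm.yaml above combines an InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4) with a KubeletConfiguration and KubeProxyConfiguration, parameterised by the node IP, CRI socket, cluster name and Kubernetes version. Purely as an illustration of how such a file is typically produced, and not minikube's actual template, a trimmed text/template rendering might look like this:

// kubeadmcfg.go - sketch of rendering a kubeadm config from a template
// (the template text is a trimmed, hypothetical stand-in).
package main

import (
	"log"
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	NodeIP, NodeName, CRISocket, KubernetesVersion, PodCIDR, ServiceCIDR string
	APIServerPort                                                        int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, params{
		NodeIP:            "192.168.61.77",
		NodeName:          "custom-flannel-020784",
		CRISocket:         "unix:///var/run/crio/crio.sock",
		KubernetesVersion: "v1.32.1",
		PodCIDR:           "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		APIServerPort:     8443,
	})
	if err != nil {
		log.Fatal(err)
	}
}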
	
	I0210 13:52:30.882467  632557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:52:30.892681  632557 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:52:30.892754  632557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:52:30.902341  632557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0210 13:52:30.919242  632557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:52:30.935534  632557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I0210 13:52:30.951935  632557 ssh_runner.go:195] Run: grep 192.168.61.77	control-plane.minikube.internal$ /etc/hosts
	I0210 13:52:30.956232  632557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:52:30.969310  632557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:52:31.090974  632557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:52:31.107418  632557 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784 for IP: 192.168.61.77
	I0210 13:52:31.107443  632557 certs.go:194] generating shared ca certs ...
	I0210 13:52:31.107461  632557 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:31.107637  632557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 13:52:31.107692  632557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 13:52:31.107705  632557 certs.go:256] generating profile certs ...
	I0210 13:52:31.107765  632557 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.key
	I0210 13:52:31.107783  632557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt with IP's: []
	I0210 13:52:31.452733  632557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt ...
	I0210 13:52:31.452767  632557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: {Name:mka7a54cfc58e259c1f27c3029da14b1b7c9b700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:31.452969  632557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.key ...
	I0210 13:52:31.452986  632557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.key: {Name:mkce2f77568f7baf27a0b1c89878f890a814200a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:31.453103  632557 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.key.cbeb999d
	I0210 13:52:31.453129  632557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.crt.cbeb999d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.77]
	I0210 13:52:31.899541  632557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.crt.cbeb999d ...
	I0210 13:52:31.899583  632557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.crt.cbeb999d: {Name:mk2ae8ad347f7abacb39caa49b54159d87088890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:31.899794  632557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.key.cbeb999d ...
	I0210 13:52:31.899824  632557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.key.cbeb999d: {Name:mk15e20e7a9a328edf36df74cc69988fe69c505c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:31.899942  632557 certs.go:381] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.crt.cbeb999d -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.crt
	I0210 13:52:31.900051  632557 certs.go:385] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.key.cbeb999d -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.key
	I0210 13:52:31.900110  632557 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.key
	I0210 13:52:31.900126  632557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.crt with IP's: []
	I0210 13:52:32.156045  632557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.crt ...
	I0210 13:52:32.156075  632557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.crt: {Name:mkbde620d568850e99c50088c5ec60483a0d8ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:32.156274  632557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.key ...
	I0210 13:52:32.156316  632557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.key: {Name:mk22e1beaf90cf75af5675199ece436bd9db7b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:32.156532  632557 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 13:52:32.156572  632557 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 13:52:32.156580  632557 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 13:52:32.156600  632557 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 13:52:32.156625  632557 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:52:32.156643  632557 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 13:52:32.156679  632557 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:52:32.157511  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:52:32.189969  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:52:32.220523  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:52:32.248111  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:52:32.277146  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 13:52:32.301888  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:52:32.327242  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:52:32.353613  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:52:32.382842  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 13:52:32.410363  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:52:32.434935  632557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 13:52:32.459911  632557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:52:32.478662  632557 ssh_runner.go:195] Run: openssl version
	I0210 13:52:32.485365  632557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 13:52:32.499541  632557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 13:52:32.504693  632557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 13:52:32.504753  632557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 13:52:32.511868  632557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 13:52:32.528415  632557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 13:52:32.542466  632557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 13:52:32.548087  632557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 13:52:32.548144  632557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 13:52:32.554790  632557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:52:32.566574  632557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:52:32.579862  632557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:52:32.585772  632557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:52:32.585837  632557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:52:32.592683  632557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
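Each `openssl x509 -hash -noout` / `ln -fs` pair above installs a CA into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL's hashed trust-directory lookup finds it. A small sketch of the same idiom (illustrative only; the helper name is hypothetical):

// cahash.go - sketch of installing a CA under its OpenSSL subject hash,
// mirroring the openssl/ln commands in the log (illustrative only).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks certPath into /etc/ssl/certs as <subject-hash>.0.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // refresh an existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}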
	I0210 13:52:32.605849  632557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:52:32.610330  632557 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 13:52:32.610390  632557 kubeadm.go:392] StartCluster: {Name:custom-flannel-020784 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:custom-flannel-020784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.77 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:52:32.610490  632557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:52:32.610547  632557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:52:32.654993  632557 cri.go:89] found id: ""
	I0210 13:52:32.655066  632557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:52:32.667275  632557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:52:32.685670  632557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:52:32.702717  632557 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:52:32.702778  632557 kubeadm.go:157] found existing configuration files:
	
	I0210 13:52:32.702838  632557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:52:32.723506  632557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:52:32.723563  632557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:52:32.741874  632557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:52:32.761474  632557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:52:32.761540  632557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:52:32.772460  632557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:52:32.782742  632557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:52:32.782810  632557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:52:32.793780  632557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:52:32.808715  632557 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:52:32.808841  632557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
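Before `kubeadm init` runs, each kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8443 endpoint and removed if it does not reference it; on this fresh VM the files simply do not exist yet, hence the exit-status-2 greps. A hedged sketch of that cleanup loop (not kubeadm.go itself):

// stalecfg.go - sketch of pruning kubeconfigs that don't point at the
// expected control-plane endpoint (illustrative only).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing somewhere else: remove so kubeadm regenerates it.
			fmt.Printf("%q may not reference %s - removing\n", path, endpoint)
			_ = os.Remove(path)
			continue
		}
		fmt.Printf("%q is up to date\n", path)
	}
}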
	I0210 13:52:32.823258  632557 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:52:32.898776  632557 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 13:52:32.898849  632557 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:52:33.014273  632557 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:52:33.014435  632557 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:52:33.014583  632557 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 13:52:33.025099  632557 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:52:33.210768  632557 out.go:235]   - Generating certificates and keys ...
	I0210 13:52:33.210896  632557 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:52:33.210979  632557 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:52:33.211071  632557 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 13:52:33.423967  632557 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 13:52:33.855576  632557 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 13:52:34.178149  632557 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 13:52:34.252719  632557 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 13:52:34.252883  632557 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-020784 localhost] and IPs [192.168.61.77 127.0.0.1 ::1]
	I0210 13:52:34.485116  632557 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 13:52:34.485459  632557 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-020784 localhost] and IPs [192.168.61.77 127.0.0.1 ::1]
	I0210 13:52:34.550592  632557 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 13:52:34.657505  632557 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 13:52:32.248751  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:34.747098  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:35.020505  632557 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 13:52:35.021186  632557 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:52:35.285599  632557 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:52:35.439282  632557 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 13:52:35.751086  632557 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:52:35.861986  632557 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:52:36.065209  632557 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:52:36.065877  632557 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:52:36.068995  632557 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:52:32.453446  628186 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0210 13:52:32.454078  628186 api_server.go:269] stopped: https://192.168.39.134:8443/healthz: Get "https://192.168.39.134:8443/healthz": dial tcp 192.168.39.134:8443: connect: connection refused
	I0210 13:52:32.454149  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:52:32.454222  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:52:32.498357  628186 cri.go:89] found id: "416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:32.498383  628186 cri.go:89] found id: ""
	I0210 13:52:32.498393  628186 logs.go:282] 1 containers: [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15]
	I0210 13:52:32.498454  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:32.503239  628186 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:52:32.503307  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:52:32.547523  628186 cri.go:89] found id: "af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:32.547550  628186 cri.go:89] found id: ""
	I0210 13:52:32.547562  628186 logs.go:282] 1 containers: [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a]
	I0210 13:52:32.547620  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:32.554581  628186 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:52:32.554647  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:52:32.599836  628186 cri.go:89] found id: "9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:32.599859  628186 cri.go:89] found id: ""
	I0210 13:52:32.599869  628186 logs.go:282] 1 containers: [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb]
	I0210 13:52:32.599925  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:32.604768  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:52:32.604837  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:52:32.648571  628186 cri.go:89] found id: "7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:32.648600  628186 cri.go:89] found id: "6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:32.648607  628186 cri.go:89] found id: ""
	I0210 13:52:32.648617  628186 logs.go:282] 2 containers: [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a]
	I0210 13:52:32.648679  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:32.653792  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:32.658526  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:52:32.658597  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:52:32.702105  628186 cri.go:89] found id: "aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:32.702153  628186 cri.go:89] found id: ""
	I0210 13:52:32.702165  628186 logs.go:282] 1 containers: [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6]
	I0210 13:52:32.702248  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:32.707581  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:52:32.707645  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:52:32.747616  628186 cri.go:89] found id: "3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:32.747694  628186 cri.go:89] found id: ""
	I0210 13:52:32.747711  628186 logs.go:282] 1 containers: [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2]
	I0210 13:52:32.747774  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:32.752557  628186 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:52:32.752638  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:52:32.796319  628186 cri.go:89] found id: ""
	I0210 13:52:32.796353  628186 logs.go:282] 0 containers: []
	W0210 13:52:32.796364  628186 logs.go:284] No container was found matching "kindnet"
	I0210 13:52:32.796378  628186 logs.go:123] Gathering logs for kube-controller-manager [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2] ...
	I0210 13:52:32.796394  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:32.841966  628186 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:52:32.842005  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:52:33.189622  628186 logs.go:123] Gathering logs for kubelet ...
	I0210 13:52:33.189666  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:52:33.306095  628186 logs.go:123] Gathering logs for dmesg ...
	I0210 13:52:33.306136  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:52:33.324239  628186 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:52:33.324271  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:52:33.425890  628186 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:52:33.425920  628186 logs.go:123] Gathering logs for kube-apiserver [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15] ...
	I0210 13:52:33.425939  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:33.477628  628186 logs.go:123] Gathering logs for etcd [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a] ...
	I0210 13:52:33.477658  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:33.526168  628186 logs.go:123] Gathering logs for coredns [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb] ...
	I0210 13:52:33.526208  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:33.569003  628186 logs.go:123] Gathering logs for kube-scheduler [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f] ...
	I0210 13:52:33.569050  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:33.640835  628186 logs.go:123] Gathering logs for kube-scheduler [6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a] ...
	I0210 13:52:33.640886  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:33.684508  628186 logs.go:123] Gathering logs for kube-proxy [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6] ...
	I0210 13:52:33.684548  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:33.731374  628186 logs.go:123] Gathering logs for container status ...
	I0210 13:52:33.731412  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:52:36.287260  628186 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0210 13:52:36.287984  628186 api_server.go:269] stopped: https://192.168.39.134:8443/healthz: Get "https://192.168.39.134:8443/healthz": dial tcp 192.168.39.134:8443: connect: connection refused
	I0210 13:52:36.288048  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:52:36.288108  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:52:36.326018  628186 cri.go:89] found id: "416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:36.326046  628186 cri.go:89] found id: ""
	I0210 13:52:36.326057  628186 logs.go:282] 1 containers: [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15]
	I0210 13:52:36.326125  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:36.330302  628186 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:52:36.330360  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:52:36.371737  628186 cri.go:89] found id: "af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:36.371769  628186 cri.go:89] found id: ""
	I0210 13:52:36.371780  628186 logs.go:282] 1 containers: [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a]
	I0210 13:52:36.371866  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:36.376289  628186 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:52:36.376394  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:52:36.416392  628186 cri.go:89] found id: "9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:36.416422  628186 cri.go:89] found id: ""
	I0210 13:52:36.416434  628186 logs.go:282] 1 containers: [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb]
	I0210 13:52:36.416495  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:36.421400  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:52:36.421476  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:52:36.458475  628186 cri.go:89] found id: "7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:36.458508  628186 cri.go:89] found id: "6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:36.458514  628186 cri.go:89] found id: ""
	I0210 13:52:36.458524  628186 logs.go:282] 2 containers: [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a]
	I0210 13:52:36.458593  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:36.463218  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:36.468209  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:52:36.468289  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:52:36.514719  628186 cri.go:89] found id: "aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:36.514748  628186 cri.go:89] found id: ""
	I0210 13:52:36.514762  628186 logs.go:282] 1 containers: [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6]
	I0210 13:52:36.514829  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:36.520540  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:52:36.520610  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:52:36.564543  628186 cri.go:89] found id: "3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:36.564581  628186 cri.go:89] found id: ""
	I0210 13:52:36.564594  628186 logs.go:282] 1 containers: [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2]
	I0210 13:52:36.564690  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:36.569927  628186 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:52:36.570015  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:52:36.617413  628186 cri.go:89] found id: ""
	I0210 13:52:36.617452  628186 logs.go:282] 0 containers: []
	W0210 13:52:36.617464  628186 logs.go:284] No container was found matching "kindnet"
	I0210 13:52:36.617478  628186 logs.go:123] Gathering logs for kube-scheduler [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f] ...
	I0210 13:52:36.617496  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:36.694274  628186 logs.go:123] Gathering logs for kube-scheduler [6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a] ...
	I0210 13:52:36.694317  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:36.739580  628186 logs.go:123] Gathering logs for kube-proxy [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6] ...
	I0210 13:52:36.739614  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:36.783252  628186 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:52:36.783303  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:52:32.509472  632910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739195552.499674975
	
	I0210 13:52:32.509499  632910 fix.go:216] guest clock: 1739195552.499674975
	I0210 13:52:32.509525  632910 fix.go:229] Guest: 2025-02-10 13:52:32.499674975 +0000 UTC Remote: 2025-02-10 13:52:32.388269216 +0000 UTC m=+9.969170882 (delta=111.405759ms)
	I0210 13:52:32.509574  632910 fix.go:200] guest clock delta is within tolerance: 111.405759ms
	I0210 13:52:32.509585  632910 start.go:83] releasing machines lock for "kubernetes-upgrade-935801", held for 7.340167515s
	I0210 13:52:32.509619  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:32.509935  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetIP
	I0210 13:52:32.513500  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.513917  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:32.513962  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.514318  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:32.514786  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:32.514972  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .DriverName
	I0210 13:52:32.515070  632910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:52:32.515118  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:32.515179  632910 ssh_runner.go:195] Run: cat /version.json
	I0210 13:52:32.515203  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHHostname
	I0210 13:52:32.518320  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.518361  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.518834  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:32.518869  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.519079  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:32.519152  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:32.519178  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:32.519255  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:32.519441  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:32.519443  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHPort
	I0210 13:52:32.519667  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHKeyPath
	I0210 13:52:32.519679  632910 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:52:32.519823  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetSSHUsername
	I0210 13:52:32.519948  632910 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/kubernetes-upgrade-935801/id_rsa Username:docker}
	I0210 13:52:32.633174  632910 ssh_runner.go:195] Run: systemctl --version
	I0210 13:52:32.640475  632910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:52:32.807974  632910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:52:32.820719  632910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:52:32.820808  632910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:52:32.834541  632910 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0210 13:52:32.834576  632910 start.go:495] detecting cgroup driver to use...
	I0210 13:52:32.834656  632910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:52:32.856946  632910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:52:32.875847  632910 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:52:32.875917  632910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:52:32.896083  632910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:52:32.915626  632910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:52:33.109730  632910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:52:33.270023  632910 docker.go:233] disabling docker service ...
	I0210 13:52:33.270180  632910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:52:33.292177  632910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:52:33.307790  632910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:52:33.476337  632910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:52:33.649006  632910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:52:33.667966  632910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:52:33.696765  632910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 13:52:33.696838  632910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:33.708334  632910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:52:33.708413  632910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:33.719594  632910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:33.731434  632910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:33.743165  632910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:52:33.758669  632910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:33.773698  632910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:33.788372  632910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:52:33.800079  632910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:52:33.809890  632910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:52:33.819500  632910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:52:33.960150  632910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:52:38.380883  632910 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.420691282s)
	I0210 13:52:38.380919  632910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:52:38.380987  632910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:52:38.387981  632910 start.go:563] Will wait 60s for crictl version
	I0210 13:52:38.388050  632910 ssh_runner.go:195] Run: which crictl
	I0210 13:52:38.392152  632910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:52:38.433673  632910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:52:38.433843  632910 ssh_runner.go:195] Run: crio --version
	I0210 13:52:38.467121  632910 ssh_runner.go:195] Run: crio --version
	I0210 13:52:38.500799  632910 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 13:52:36.070622  632557 out.go:235]   - Booting up control plane ...
	I0210 13:52:36.070787  632557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:52:36.071648  632557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:52:36.072652  632557 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:52:36.091232  632557 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:52:36.098702  632557 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:52:36.098764  632557 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:52:36.222999  632557 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 13:52:36.223159  632557 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 13:52:37.223710  632557 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001205203s
	I0210 13:52:37.223862  632557 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 13:52:36.754683  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:39.248438  630631 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-9chq4" in "kube-system" namespace has status "Ready":"False"
	I0210 13:52:37.128469  628186 logs.go:123] Gathering logs for container status ...
	I0210 13:52:37.128511  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:52:37.173208  628186 logs.go:123] Gathering logs for dmesg ...
	I0210 13:52:37.173247  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:52:37.190746  628186 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:52:37.190783  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:52:37.276651  628186 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:52:37.276676  628186 logs.go:123] Gathering logs for kube-apiserver [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15] ...
	I0210 13:52:37.276689  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:37.327299  628186 logs.go:123] Gathering logs for kube-controller-manager [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2] ...
	I0210 13:52:37.327339  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:37.387732  628186 logs.go:123] Gathering logs for kubelet ...
	I0210 13:52:37.387767  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:52:37.496264  628186 logs.go:123] Gathering logs for etcd [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a] ...
	I0210 13:52:37.496323  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:37.546810  628186 logs.go:123] Gathering logs for coredns [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb] ...
	I0210 13:52:37.546847  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:40.096330  628186 api_server.go:253] Checking apiserver healthz at https://192.168.39.134:8443/healthz ...
	I0210 13:52:40.097120  628186 api_server.go:269] stopped: https://192.168.39.134:8443/healthz: Get "https://192.168.39.134:8443/healthz": dial tcp 192.168.39.134:8443: connect: connection refused
	I0210 13:52:40.097209  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:52:40.097277  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:52:40.149406  628186 cri.go:89] found id: "416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:40.149436  628186 cri.go:89] found id: ""
	I0210 13:52:40.149445  628186 logs.go:282] 1 containers: [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15]
	I0210 13:52:40.149509  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:40.155474  628186 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:52:40.155609  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:52:40.203717  628186 cri.go:89] found id: "af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:40.203747  628186 cri.go:89] found id: ""
	I0210 13:52:40.203758  628186 logs.go:282] 1 containers: [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a]
	I0210 13:52:40.203822  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:40.209914  628186 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:52:40.210053  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:52:40.261218  628186 cri.go:89] found id: "9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:40.261244  628186 cri.go:89] found id: ""
	I0210 13:52:40.261254  628186 logs.go:282] 1 containers: [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb]
	I0210 13:52:40.261322  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:40.265727  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:52:40.265797  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:52:40.303073  628186 cri.go:89] found id: "7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:40.303162  628186 cri.go:89] found id: "6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:40.303173  628186 cri.go:89] found id: ""
	I0210 13:52:40.303183  628186 logs.go:282] 2 containers: [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a]
	I0210 13:52:40.303269  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:40.307910  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:40.312298  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:52:40.312367  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:52:40.351076  628186 cri.go:89] found id: "aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:52:40.351107  628186 cri.go:89] found id: ""
	I0210 13:52:40.351117  628186 logs.go:282] 1 containers: [aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6]
	I0210 13:52:40.351182  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:40.356153  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:52:40.356228  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:52:40.408119  628186 cri.go:89] found id: "3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2"
	I0210 13:52:40.408226  628186 cri.go:89] found id: ""
	I0210 13:52:40.408250  628186 logs.go:282] 1 containers: [3c000495ce0b03031b9cc86dbc1614c2d72753aa5b5333577df6b42cb215f3d2]
	I0210 13:52:40.408363  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:52:40.413805  628186 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:52:40.413935  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:52:40.466370  628186 cri.go:89] found id: ""
	I0210 13:52:40.466459  628186 logs.go:282] 0 containers: []
	W0210 13:52:40.466481  628186 logs.go:284] No container was found matching "kindnet"
	I0210 13:52:40.466503  628186 logs.go:123] Gathering logs for kube-apiserver [416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15] ...
	I0210 13:52:40.466553  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 416aaf85e0a92ddbd1faeba773391a0e7ea5d3324cae6545222c3fcab42efd15"
	I0210 13:52:40.522049  628186 logs.go:123] Gathering logs for etcd [af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a] ...
	I0210 13:52:40.522193  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:52:40.592290  628186 logs.go:123] Gathering logs for coredns [9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb] ...
	I0210 13:52:40.592337  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:52:40.648551  628186 logs.go:123] Gathering logs for kube-scheduler [7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f] ...
	I0210 13:52:40.648602  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e33f306735b9223cc680e4119dc5fe7ee8974769daecef2b887e603dc8e110f"
	I0210 13:52:40.732058  628186 logs.go:123] Gathering logs for kube-scheduler [6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a] ...
	I0210 13:52:40.732116  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:52:40.783506  628186 logs.go:123] Gathering logs for container status ...
	I0210 13:52:40.783552  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:52:40.916365  628186 logs.go:123] Gathering logs for kubelet ...
	I0210 13:52:40.916412  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:52:41.078773  628186 logs.go:123] Gathering logs for dmesg ...
	I0210 13:52:41.078825  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 13:52:41.098636  628186 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:52:41.098684  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 13:52:38.502413  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) Calling .GetIP
	I0210 13:52:38.505532  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:38.505912  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:bd:cd", ip: ""} in network mk-kubernetes-upgrade-935801: {Iface:virbr3 ExpiryTime:2025-02-10 14:51:51 +0000 UTC Type:0 Mac:52:54:00:bc:bd:cd Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:kubernetes-upgrade-935801 Clientid:01:52:54:00:bc:bd:cd}
	I0210 13:52:38.505943  632910 main.go:141] libmachine: (kubernetes-upgrade-935801) DBG | domain kubernetes-upgrade-935801 has defined IP address 192.168.72.152 and MAC address 52:54:00:bc:bd:cd in network mk-kubernetes-upgrade-935801
	I0210 13:52:38.506140  632910 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0210 13:52:38.510846  632910 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:52:38.510980  632910 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:52:38.511035  632910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:52:38.563971  632910 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:52:38.564007  632910 crio.go:433] Images already preloaded, skipping extraction
	I0210 13:52:38.564071  632910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:52:38.597314  632910 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:52:38.597344  632910 cache_images.go:84] Images are preloaded, skipping loading
	I0210 13:52:38.597354  632910 kubeadm.go:934] updating node { 192.168.72.152 8443 v1.32.1 crio true true} ...
	I0210 13:52:38.597484  632910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-935801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:52:38.597573  632910 ssh_runner.go:195] Run: crio config
	I0210 13:52:38.658661  632910 cni.go:84] Creating CNI manager for ""
	I0210 13:52:38.658694  632910 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:52:38.658706  632910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:52:38.658737  632910 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.152 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-935801 NodeName:kubernetes-upgrade-935801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:52:38.658925  632910 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-935801"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.152"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.152"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:52:38.659005  632910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:52:38.671044  632910 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:52:38.671129  632910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:52:38.684358  632910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0210 13:52:38.706086  632910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:52:38.730028  632910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0210 13:52:38.752066  632910 ssh_runner.go:195] Run: grep 192.168.72.152	control-plane.minikube.internal$ /etc/hosts
	I0210 13:52:38.756653  632910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:52:38.909201  632910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:52:38.926186  632910 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801 for IP: 192.168.72.152
	I0210 13:52:38.926220  632910 certs.go:194] generating shared ca certs ...
	I0210 13:52:38.926263  632910 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:52:38.926465  632910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 13:52:38.926562  632910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 13:52:38.926587  632910 certs.go:256] generating profile certs ...
	I0210 13:52:38.926765  632910 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/client.key
	I0210 13:52:38.926847  632910 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key.0f2d8851
	I0210 13:52:38.926908  632910 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.key
	I0210 13:52:38.927070  632910 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 13:52:38.927139  632910 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 13:52:38.927154  632910 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 13:52:38.927202  632910 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 13:52:38.927235  632910 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:52:38.927266  632910 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 13:52:38.927337  632910 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:52:38.928365  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:52:38.953761  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:52:38.978361  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:52:39.008252  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:52:39.034369  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0210 13:52:39.064339  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 13:52:39.093637  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:52:39.122158  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kubernetes-upgrade-935801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:52:39.150125  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:52:39.180107  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 13:52:39.212265  632910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 13:52:39.241341  632910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:52:39.258752  632910 ssh_runner.go:195] Run: openssl version
	I0210 13:52:39.264998  632910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:52:39.275720  632910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:52:39.280513  632910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:52:39.280574  632910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:52:39.287413  632910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:52:39.300886  632910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 13:52:39.317241  632910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 13:52:39.323874  632910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 13:52:39.323926  632910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 13:52:39.331247  632910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 13:52:39.341458  632910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 13:52:39.354413  632910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 13:52:39.360881  632910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 13:52:39.360961  632910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 13:52:39.369150  632910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:52:39.382667  632910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:52:39.388925  632910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:52:39.396835  632910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:52:39.405045  632910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:52:39.411053  632910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:52:39.418516  632910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:52:39.426192  632910 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 13:52:39.433773  632910 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-935801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-935801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:52:39.433877  632910 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:52:39.433948  632910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:52:39.485516  632910 cri.go:89] found id: "15adc1f0ceb67aa358cfdcdbad36694cd0c86d5a62a3dde266be415c14084de1"
	I0210 13:52:39.485551  632910 cri.go:89] found id: "d03f4bd3188f3ae588d103fa4570e4466dde356eeefbcd8666c2462c2b00de5e"
	I0210 13:52:39.485557  632910 cri.go:89] found id: "f7f18cf5b78f4bec5dca48ad2a922dfa823349044c53e5c45a931c7d2ea67633"
	I0210 13:52:39.485567  632910 cri.go:89] found id: "d281a5b0c1b2505d03c462935c6ebff80529269e14e1bad841b353e06cfe2ce4"
	I0210 13:52:39.485571  632910 cri.go:89] found id: "80ce64fbaa7cac48db0ea80a21e4841aeca612c230a9b4b3a91391753c59b0e6"
	I0210 13:52:39.485576  632910 cri.go:89] found id: "8eba9a938759f5f86bbd5d395226cbcc6ee7e2b400297c376d334f9394348954"
	I0210 13:52:39.485580  632910 cri.go:89] found id: "705cf8fb95bf5564d4bf25d0fc6e52aaf4cbd67e25353ad410d3e0910fa79159"
	I0210 13:52:39.485584  632910 cri.go:89] found id: "8cb705917a18637205e58565247fa8f5168d74dbcd6324e70c6fef97e1dcd43e"
	I0210 13:52:39.485588  632910 cri.go:89] found id: ""
	I0210 13:52:39.485645  632910 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-935801 -n kubernetes-upgrade-935801
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-935801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-935801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-935801
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-935801: (1.183128468s)
--- FAIL: TestKubernetesUpgrade (385.75s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (838.49s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145767 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-145767 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m56.86117959s)

                                                
                                                
-- stdout --
	* [pause-145767] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-145767" primary control-plane node in "pause-145767" cluster
	* Updating the running kvm2 "pause-145767" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:49:32.084438  628186 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:49:32.084580  628186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:49:32.084591  628186 out.go:358] Setting ErrFile to fd 2...
	I0210 13:49:32.084597  628186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:49:32.084915  628186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:49:32.085589  628186 out.go:352] Setting JSON to false
	I0210 13:49:32.086988  628186 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12717,"bootTime":1739182655,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:49:32.087132  628186 start.go:139] virtualization: kvm guest
	I0210 13:49:32.089373  628186 out.go:177] * [pause-145767] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:49:32.090736  628186 notify.go:220] Checking for updates...
	I0210 13:49:32.090771  628186 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 13:49:32.092241  628186 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:49:32.093754  628186 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:49:32.095109  628186 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:49:32.096402  628186 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:49:32.097588  628186 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:49:32.099472  628186 config.go:182] Loaded profile config "pause-145767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:49:32.100130  628186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:49:32.100206  628186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:49:32.123147  628186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0210 13:49:32.123780  628186 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:49:32.124540  628186 main.go:141] libmachine: Using API Version  1
	I0210 13:49:32.124567  628186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:49:32.125275  628186 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:49:32.125528  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:32.125848  628186 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:49:32.126317  628186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:49:32.126412  628186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:49:32.147999  628186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
	I0210 13:49:32.148604  628186 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:49:32.149272  628186 main.go:141] libmachine: Using API Version  1
	I0210 13:49:32.149297  628186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:49:32.149771  628186 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:49:32.150020  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:32.191804  628186 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 13:49:32.193033  628186 start.go:297] selected driver: kvm2
	I0210 13:49:32.193058  628186 start.go:901] validating driver "kvm2" against &{Name:pause-145767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-145767 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-poli
cy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:49:32.193241  628186 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:49:32.193607  628186 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:49:32.193689  628186 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:49:32.209360  628186 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:49:32.210053  628186 cni.go:84] Creating CNI manager for ""
	I0210 13:49:32.210106  628186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:49:32.210159  628186 start.go:340] cluster config:
	{Name:pause-145767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-145767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:
false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:49:32.210294  628186 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:49:32.212019  628186 out.go:177] * Starting "pause-145767" primary control-plane node in "pause-145767" cluster
	I0210 13:49:32.213281  628186 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:49:32.213333  628186 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 13:49:32.213344  628186 cache.go:56] Caching tarball of preloaded images
	I0210 13:49:32.213443  628186 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:49:32.213460  628186 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 13:49:32.213618  628186 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/config.json ...
	I0210 13:49:32.213850  628186 start.go:360] acquireMachinesLock for pause-145767: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:49:32.213910  628186 start.go:364] duration metric: took 35.688µs to acquireMachinesLock for "pause-145767"
	I0210 13:49:32.213930  628186 start.go:96] Skipping create...Using existing machine configuration
	I0210 13:49:32.213938  628186 fix.go:54] fixHost starting: 
	I0210 13:49:32.214239  628186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:49:32.214280  628186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:49:32.230047  628186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0210 13:49:32.230640  628186 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:49:32.231207  628186 main.go:141] libmachine: Using API Version  1
	I0210 13:49:32.231230  628186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:49:32.231635  628186 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:49:32.231809  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:32.231995  628186 main.go:141] libmachine: (pause-145767) Calling .GetState
	I0210 13:49:32.234071  628186 fix.go:112] recreateIfNeeded on pause-145767: state=Running err=<nil>
	W0210 13:49:32.234097  628186 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 13:49:32.235554  628186 out.go:177] * Updating the running kvm2 "pause-145767" VM ...
	I0210 13:49:32.237106  628186 machine.go:93] provisionDockerMachine start ...
	I0210 13:49:32.237135  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:32.237362  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:32.240549  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.240912  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:32.240935  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.241142  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:32.241316  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:32.241454  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:32.241565  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:32.241700  628186 main.go:141] libmachine: Using SSH client type: native
	I0210 13:49:32.241892  628186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0210 13:49:32.241905  628186 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 13:49:32.373232  628186 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-145767
	
	I0210 13:49:32.373289  628186 main.go:141] libmachine: (pause-145767) Calling .GetMachineName
	I0210 13:49:32.373581  628186 buildroot.go:166] provisioning hostname "pause-145767"
	I0210 13:49:32.373617  628186 main.go:141] libmachine: (pause-145767) Calling .GetMachineName
	I0210 13:49:32.373868  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:32.377512  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.378007  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:32.378068  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.378211  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:32.378410  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:32.378600  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:32.378773  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:32.378941  628186 main.go:141] libmachine: Using SSH client type: native
	I0210 13:49:32.379209  628186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0210 13:49:32.379232  628186 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-145767 && echo "pause-145767" | sudo tee /etc/hostname
	I0210 13:49:32.527248  628186 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-145767
	
	I0210 13:49:32.527283  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:32.530603  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.531007  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:32.531039  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.531263  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:32.531491  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:32.531671  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:32.531886  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:32.532080  628186 main.go:141] libmachine: Using SSH client type: native
	I0210 13:49:32.532345  628186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0210 13:49:32.532370  628186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-145767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-145767/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-145767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:49:32.662888  628186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:49:32.662920  628186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 13:49:32.662957  628186 buildroot.go:174] setting up certificates
	I0210 13:49:32.662969  628186 provision.go:84] configureAuth start
	I0210 13:49:32.662984  628186 main.go:141] libmachine: (pause-145767) Calling .GetMachineName
	I0210 13:49:32.663294  628186 main.go:141] libmachine: (pause-145767) Calling .GetIP
	I0210 13:49:32.666640  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.667029  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:32.667081  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.667262  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:32.670122  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.670762  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:32.670795  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.670931  628186 provision.go:143] copyHostCerts
	I0210 13:49:32.671012  628186 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 13:49:32.671031  628186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 13:49:32.671104  628186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 13:49:32.671233  628186 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 13:49:32.671244  628186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 13:49:32.671271  628186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 13:49:32.671343  628186 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 13:49:32.671354  628186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 13:49:32.671378  628186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 13:49:32.671442  628186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.pause-145767 san=[127.0.0.1 192.168.39.134 localhost minikube pause-145767]
	I0210 13:49:32.840792  628186 provision.go:177] copyRemoteCerts
	I0210 13:49:32.840875  628186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:49:32.840915  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:32.844104  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.844784  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:32.844879  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:32.844980  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:32.845433  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:32.846918  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:32.847132  628186 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/pause-145767/id_rsa Username:docker}
	I0210 13:49:32.947011  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 13:49:32.979131  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 13:49:33.014714  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 13:49:33.059071  628186 provision.go:87] duration metric: took 396.084061ms to configureAuth
	I0210 13:49:33.059108  628186 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:49:33.059449  628186 config.go:182] Loaded profile config "pause-145767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:49:33.059571  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:33.062933  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:33.063438  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:33.063478  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:33.063832  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:33.064067  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:33.064328  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:33.064539  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:33.064757  628186 main.go:141] libmachine: Using SSH client type: native
	I0210 13:49:33.065001  628186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0210 13:49:33.065027  628186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:49:38.734901  628186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:49:38.735018  628186 machine.go:96] duration metric: took 6.497894808s to provisionDockerMachine
	I0210 13:49:38.735052  628186 start.go:293] postStartSetup for "pause-145767" (driver="kvm2")
	I0210 13:49:38.735091  628186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:49:38.735146  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:38.735698  628186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:49:38.735743  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:38.739191  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:38.739791  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:38.739815  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:38.740255  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:38.740529  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:38.740703  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:38.740911  628186 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/pause-145767/id_rsa Username:docker}
	I0210 13:49:38.843443  628186 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:49:38.849578  628186 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:49:38.849633  628186 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 13:49:38.849718  628186 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 13:49:38.849837  628186 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 13:49:38.849991  628186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:49:38.861702  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:49:38.896413  628186 start.go:296] duration metric: took 161.316996ms for postStartSetup
	I0210 13:49:38.896472  628186 fix.go:56] duration metric: took 6.682531848s for fixHost
	I0210 13:49:38.896504  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:38.900001  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:38.900437  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:38.900488  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:38.900781  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:38.901020  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:38.901249  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:38.901432  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:38.901638  628186 main.go:141] libmachine: Using SSH client type: native
	I0210 13:49:38.901906  628186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0210 13:49:38.901920  628186 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:49:39.035051  628186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739195379.025841082
	
	I0210 13:49:39.035080  628186 fix.go:216] guest clock: 1739195379.025841082
	I0210 13:49:39.035090  628186 fix.go:229] Guest: 2025-02-10 13:49:39.025841082 +0000 UTC Remote: 2025-02-10 13:49:38.896479128 +0000 UTC m=+6.875347646 (delta=129.361954ms)
	I0210 13:49:39.035137  628186 fix.go:200] guest clock delta is within tolerance: 129.361954ms
	I0210 13:49:39.035145  628186 start.go:83] releasing machines lock for "pause-145767", held for 6.821222299s
	I0210 13:49:39.035169  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:39.035472  628186 main.go:141] libmachine: (pause-145767) Calling .GetIP
	I0210 13:49:39.038821  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:39.039426  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:39.039451  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:39.039774  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:39.040399  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:39.040633  628186 main.go:141] libmachine: (pause-145767) Calling .DriverName
	I0210 13:49:39.040719  628186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:49:39.040766  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:39.041129  628186 ssh_runner.go:195] Run: cat /version.json
	I0210 13:49:39.041156  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHHostname
	I0210 13:49:39.044160  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:39.044626  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:39.044662  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:39.044855  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:39.045092  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:39.045379  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:49:39.045402  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:49:39.045432  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:39.045512  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHPort
	I0210 13:49:39.045690  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:39.045691  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHKeyPath
	I0210 13:49:39.045840  628186 main.go:141] libmachine: (pause-145767) Calling .GetSSHUsername
	I0210 13:49:39.045885  628186 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/pause-145767/id_rsa Username:docker}
	I0210 13:49:39.045973  628186 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/pause-145767/id_rsa Username:docker}
	I0210 13:49:39.174297  628186 ssh_runner.go:195] Run: systemctl --version
	I0210 13:49:39.284888  628186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:49:39.927193  628186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:49:39.956663  628186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:49:39.956771  628186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:49:40.015214  628186 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0210 13:49:40.015244  628186 start.go:495] detecting cgroup driver to use...
	I0210 13:49:40.015323  628186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:49:40.070981  628186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:49:40.156613  628186 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:49:40.156680  628186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:49:40.209452  628186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:49:40.281326  628186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:49:40.678940  628186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:49:40.903785  628186 docker.go:233] disabling docker service ...
	I0210 13:49:40.903854  628186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:49:40.944601  628186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:49:41.005276  628186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:49:41.232994  628186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:49:41.448912  628186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:49:41.472239  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:49:41.494773  628186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 13:49:41.494877  628186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:49:41.508998  628186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:49:41.509095  628186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:49:41.525923  628186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:49:41.543972  628186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:49:41.562238  628186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:49:41.588634  628186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:49:41.615516  628186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:49:41.629844  628186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:49:41.645151  628186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:49:41.670151  628186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:49:41.683234  628186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:49:41.889203  628186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:51:12.333080  628186 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.443832217s)
	I0210 13:51:12.333111  628186 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:51:12.333168  628186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:51:12.343892  628186 start.go:563] Will wait 60s for crictl version
	I0210 13:51:12.343991  628186 ssh_runner.go:195] Run: which crictl
	I0210 13:51:12.350149  628186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:51:12.407815  628186 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:51:12.407960  628186 ssh_runner.go:195] Run: crio --version
	I0210 13:51:12.442219  628186 ssh_runner.go:195] Run: crio --version
	I0210 13:51:12.482707  628186 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 13:51:12.484116  628186 main.go:141] libmachine: (pause-145767) Calling .GetIP
	I0210 13:51:12.488099  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:51:12.488718  628186 main.go:141] libmachine: (pause-145767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:b5:bd", ip: ""} in network mk-pause-145767: {Iface:virbr4 ExpiryTime:2025-02-10 14:48:49 +0000 UTC Type:0 Mac:52:54:00:ee:b5:bd Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:pause-145767 Clientid:01:52:54:00:ee:b5:bd}
	I0210 13:51:12.488762  628186 main.go:141] libmachine: (pause-145767) DBG | domain pause-145767 has defined IP address 192.168.39.134 and MAC address 52:54:00:ee:b5:bd in network mk-pause-145767
	I0210 13:51:12.489074  628186 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 13:51:12.495330  628186 kubeadm.go:883] updating cluster {Name:pause-145767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-145767 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portai
ner:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:51:12.495476  628186 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 13:51:12.495522  628186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:51:12.544656  628186 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:51:12.544694  628186 crio.go:433] Images already preloaded, skipping extraction
	I0210 13:51:12.544756  628186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:51:12.586858  628186 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 13:51:12.586902  628186 cache_images.go:84] Images are preloaded, skipping loading
	I0210 13:51:12.586911  628186 kubeadm.go:934] updating node { 192.168.39.134 8443 v1.32.1 crio true true} ...
	I0210 13:51:12.587034  628186 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-145767 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-145767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 13:51:12.587101  628186 ssh_runner.go:195] Run: crio config
	I0210 13:51:12.640760  628186 cni.go:84] Creating CNI manager for ""
	I0210 13:51:12.640793  628186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:51:12.640808  628186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:51:12.640843  628186 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-145767 NodeName:pause-145767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 13:51:12.641059  628186 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-145767"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.134"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.134"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 13:51:12.641183  628186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 13:51:12.653006  628186 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:51:12.653100  628186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:51:12.664081  628186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0210 13:51:12.685887  628186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:51:12.708393  628186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0210 13:51:12.731510  628186 ssh_runner.go:195] Run: grep 192.168.39.134	control-plane.minikube.internal$ /etc/hosts
	I0210 13:51:12.736040  628186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:51:12.902793  628186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 13:51:12.921067  628186 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767 for IP: 192.168.39.134
	I0210 13:51:12.921098  628186 certs.go:194] generating shared ca certs ...
	I0210 13:51:12.921134  628186 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:51:12.921332  628186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 13:51:12.921373  628186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 13:51:12.921384  628186 certs.go:256] generating profile certs ...
	I0210 13:51:12.921466  628186 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/client.key
	I0210 13:51:12.921522  628186 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/apiserver.key.384c666c
	I0210 13:51:12.921561  628186 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/proxy-client.key
	I0210 13:51:12.921659  628186 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 13:51:12.921686  628186 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 13:51:12.921696  628186 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 13:51:12.921717  628186 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 13:51:12.921740  628186 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:51:12.921764  628186 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 13:51:12.921802  628186 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:51:12.922438  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:51:12.954113  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:51:12.987719  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:51:13.021937  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:51:13.056185  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0210 13:51:13.091556  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 13:51:13.123498  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:51:13.158455  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/pause-145767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 13:51:13.191151  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 13:51:13.221833  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:51:13.250950  628186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 13:51:13.283315  628186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:51:13.305590  628186 ssh_runner.go:195] Run: openssl version
	I0210 13:51:13.313982  628186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 13:51:13.327641  628186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 13:51:13.335492  628186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 13:51:13.335576  628186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 13:51:13.342546  628186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 13:51:13.353555  628186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:51:13.366350  628186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:51:13.372085  628186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:51:13.372175  628186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:51:13.381240  628186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:51:13.392833  628186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 13:51:13.410361  628186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 13:51:13.416023  628186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 13:51:13.416094  628186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 13:51:13.423017  628186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
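
	The run above installs each CA the way OpenSSL's trust directory expects: copy the PEM under /usr/share/ca-certificates, compute its subject hash with "openssl x509 -hash -noout", and symlink it into /etc/ssl/certs as <hash>.0. A minimal Go sketch of that hash-and-link step, purely illustrative; it shells out to the same openssl command the log shows, and the paths in main are example values only:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links certPath into dir under OpenSSL's hashed name (<subject-hash>.0),
	// the layout that lets /etc/ssl/certs lookups find a CA by subject.
	// Illustrative only; it runs the same openssl command seen in the log above.
	func installCA(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(dir, hash+".0")
		_ = os.Remove(link) // behave like ln -fs: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		// Example paths only.
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
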
	I0210 13:51:13.434476  628186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:51:13.440346  628186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 13:51:13.448124  628186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 13:51:13.455581  628186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 13:51:13.463936  628186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 13:51:13.472215  628186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 13:51:13.481363  628186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
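
	Each "openssl x509 -noout ... -checkend 86400" call above asks whether a control-plane certificate will expire within the next 24 hours, so a stale cert can be regenerated before the apiserver comes back up. A minimal sketch of the equivalent check with Go's crypto/x509, using a hypothetical path, for illustration only:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring what openssl x509 -checkend answers with its exit code.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Hypothetical path; the log checks several certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h; it should be regenerated before restart")
		}
	}
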
	I0210 13:51:13.489698  628186 kubeadm.go:392] StartCluster: {Name:pause-145767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-145767 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer
:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:51:13.489893  628186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:51:13.489990  628186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:51:13.676482  628186 cri.go:89] found id: "9cb7a5b6383af7225f9e5add35c3d42b4bd79a26dd7442417ca76fab051114fb"
	I0210 13:51:13.676514  628186 cri.go:89] found id: "aca5b38a58cf7662e415c48746f453dbf7e970fcc821580a618605a1a3efe9d6"
	I0210 13:51:13.676519  628186 cri.go:89] found id: "8c60d8a2834414bc15e92f8bbc4ab96c4fcfa3a4c9044772f3af4cdf76e67625"
	I0210 13:51:13.676524  628186 cri.go:89] found id: "6780afb4cfb87664009720efa71417776e9075eca8c9ae23be32b8924cff61e2"
	I0210 13:51:13.676528  628186 cri.go:89] found id: "6f56fa606c4d66026797c0c29de63d624a2e5d986ed23c8c94cb3ebb9a474c5a"
	I0210 13:51:13.676533  628186 cri.go:89] found id: "e6388bd6154de5f846821eede70e488096b314b32931905198494bcc8789c8c0"
	I0210 13:51:13.676536  628186 cri.go:89] found id: "2e6df43ad43424091ff648e5e77f9d0d3204b89705667e640c377db856d3ea80"
	I0210 13:51:13.676540  628186 cri.go:89] found id: "51fb91d9d2de6bd421b1d59804cbdbcf72905961244ef39865c711f043956c6c"
	I0210 13:51:13.676544  628186 cri.go:89] found id: "8b4147f83a2ccdc5630573ef94951195db655c9f7ccf1516167fc1a9ed84e4a7"
	I0210 13:51:13.676558  628186 cri.go:89] found id: "1600067ed0745e4b8e35995f413b2f07bf844d3dd6937deaf068eb49b4a3b18e"
	I0210 13:51:13.676563  628186 cri.go:89] found id: "af002f7e276527951597887405f05e7c9fa9d9d3e144cfb630f9c0c08643f97a"
	I0210 13:51:13.676567  628186 cri.go:89] found id: ""
	I0210 13:51:13.676625  628186 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-145767 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio" : exit status 109
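
The stderr log above ends while the start code enumerates kube-system containers through CRI (crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system, followed by sudo runc list -f json). For reference, a minimal sketch that reruns the same crictl query locally; it assumes crictl is installed and already pointed at the node's CRI socket, and is not part of the captured output:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same filter the log uses; assumes crictl is installed and configured.
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system container IDs\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}
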
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-145767 -n pause-145767
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-145767 -n pause-145767: exit status 2 (249.586858ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-145767 logs -n 25
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-020784 sudo                                  | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| start   | -p no-preload-264648                                   | no-preload-264648      | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:56 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo                                  | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo                                  | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo cat                              | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo cat                              | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo                                  | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo                                  | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo                                  | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo find                             | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-020784 sudo crio                             | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-020784                                       | bridge-020784          | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:55 UTC |
	| start   | -p embed-certs-963165                                  | embed-certs-963165     | jenkins | v1.35.0 | 10 Feb 25 13:55 UTC | 10 Feb 25 13:56 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-264648             | no-preload-264648      | jenkins | v1.35.0 | 10 Feb 25 13:56 UTC | 10 Feb 25 13:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-264648                                   | no-preload-264648      | jenkins | v1.35.0 | 10 Feb 25 13:56 UTC | 10 Feb 25 13:58 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-963165            | embed-certs-963165     | jenkins | v1.35.0 | 10 Feb 25 13:56 UTC | 10 Feb 25 13:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-963165                                  | embed-certs-963165     | jenkins | v1.35.0 | 10 Feb 25 13:56 UTC | 10 Feb 25 13:58 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-264648                  | no-preload-264648      | jenkins | v1.35.0 | 10 Feb 25 13:58 UTC | 10 Feb 25 13:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-264648                                   | no-preload-264648      | jenkins | v1.35.0 | 10 Feb 25 13:58 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-963165                 | embed-certs-963165     | jenkins | v1.35.0 | 10 Feb 25 13:58 UTC | 10 Feb 25 13:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-963165                                  | embed-certs-963165     | jenkins | v1.35.0 | 10 Feb 25 13:58 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-643105        | old-k8s-version-643105 | jenkins | v1.35.0 | 10 Feb 25 13:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-643105                              | old-k8s-version-643105 | jenkins | v1.35.0 | 10 Feb 25 14:00 UTC | 10 Feb 25 14:00 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-643105             | old-k8s-version-643105 | jenkins | v1.35.0 | 10 Feb 25 14:00 UTC | 10 Feb 25 14:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-643105                              | old-k8s-version-643105 | jenkins | v1.35.0 | 10 Feb 25 14:00 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 14:00:41
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
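
	Every entry below follows the klog-style prefix documented in the header above: severity letter, mmdd date, timestamp, thread/process id, then source file:line. A small illustrative parser for that prefix, handy when slicing these reports by pid or source file (the sample line is taken from this log; the regexp is an assumption of this write-up, not minikube code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches: severity, mmdd, hh:mm:ss.uuuuuu, thread/process id, file:line, message.
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ \]]+:\d+)\] (.*)$`)

	func main() {
		sample := "I0210 14:00:41.840976  644218 out.go:345] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("not a klog-formatted line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
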
	I0210 14:00:41.840976  644218 out.go:345] Setting OutFile to fd 1 ...
	I0210 14:00:41.841244  644218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:00:41.841254  644218 out.go:358] Setting ErrFile to fd 2...
	I0210 14:00:41.841258  644218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:00:41.841448  644218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 14:00:41.841985  644218 out.go:352] Setting JSON to false
	I0210 14:00:41.843021  644218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13387,"bootTime":1739182655,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 14:00:41.843087  644218 start.go:139] virtualization: kvm guest
	I0210 14:00:41.845249  644218 out.go:177] * [old-k8s-version-643105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 14:00:41.847199  644218 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 14:00:41.847096  644218 notify.go:220] Checking for updates...
	I0210 14:00:41.850042  644218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 14:00:41.851411  644218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:00:41.852668  644218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 14:00:41.853832  644218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 14:00:41.855061  644218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 14:00:41.856768  644218 config.go:182] Loaded profile config "old-k8s-version-643105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 14:00:41.857113  644218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:00:41.857187  644218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:00:41.872520  644218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I0210 14:00:41.873024  644218 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:00:41.873633  644218 main.go:141] libmachine: Using API Version  1
	I0210 14:00:41.873656  644218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:00:41.873983  644218 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:00:41.874226  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:00:41.875969  644218 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 14:00:41.877309  644218 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 14:00:41.877801  644218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:00:41.877851  644218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:00:41.893162  644218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I0210 14:00:41.893566  644218 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:00:41.894098  644218 main.go:141] libmachine: Using API Version  1
	I0210 14:00:41.894123  644218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:00:41.894424  644218 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:00:41.894610  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:00:41.929538  644218 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 14:00:41.930719  644218 start.go:297] selected driver: kvm2
	I0210 14:00:41.930738  644218 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-6
43105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:00:41.930864  644218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 14:00:41.931823  644218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:00:41.931930  644218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 14:00:41.946635  644218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 14:00:41.947040  644218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 14:00:41.947074  644218 cni.go:84] Creating CNI manager for ""
	I0210 14:00:41.947123  644218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:00:41.947165  644218 start.go:340] cluster config:
	{Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:00:41.947263  644218 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:00:41.949650  644218 out.go:177] * Starting "old-k8s-version-643105" primary control-plane node in "old-k8s-version-643105" cluster
	I0210 14:00:41.951045  644218 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 14:00:41.951088  644218 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 14:00:41.951103  644218 cache.go:56] Caching tarball of preloaded images
	I0210 14:00:41.951198  644218 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 14:00:41.951214  644218 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 14:00:41.951327  644218 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/config.json ...
	I0210 14:00:41.951499  644218 start.go:360] acquireMachinesLock for old-k8s-version-643105: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 14:00:41.951543  644218 start.go:364] duration metric: took 25.67µs to acquireMachinesLock for "old-k8s-version-643105"
	I0210 14:00:41.951571  644218 start.go:96] Skipping create...Using existing machine configuration
	I0210 14:00:41.951579  644218 fix.go:54] fixHost starting: 
	I0210 14:00:41.951830  644218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:00:41.951874  644218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:00:41.965913  644218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0210 14:00:41.966370  644218 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:00:41.966823  644218 main.go:141] libmachine: Using API Version  1
	I0210 14:00:41.966844  644218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:00:41.967114  644218 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:00:41.967305  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:00:41.967438  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetState
	I0210 14:00:41.968911  644218 fix.go:112] recreateIfNeeded on old-k8s-version-643105: state=Stopped err=<nil>
	I0210 14:00:41.968939  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	W0210 14:00:41.969085  644218 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 14:00:41.970891  644218 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-643105" ...
	I0210 14:00:41.119209  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:43.119272  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:42.481962  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:44.482499  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:41.971991  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .Start
	I0210 14:00:41.972215  644218 main.go:141] libmachine: (old-k8s-version-643105) starting domain...
	I0210 14:00:41.972236  644218 main.go:141] libmachine: (old-k8s-version-643105) ensuring networks are active...
	I0210 14:00:41.973021  644218 main.go:141] libmachine: (old-k8s-version-643105) Ensuring network default is active
	I0210 14:00:41.973394  644218 main.go:141] libmachine: (old-k8s-version-643105) Ensuring network mk-old-k8s-version-643105 is active
	I0210 14:00:41.973735  644218 main.go:141] libmachine: (old-k8s-version-643105) getting domain XML...
	I0210 14:00:41.974618  644218 main.go:141] libmachine: (old-k8s-version-643105) creating domain...
	I0210 14:00:43.237958  644218 main.go:141] libmachine: (old-k8s-version-643105) waiting for IP...
	I0210 14:00:43.238951  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:43.239391  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:43.239494  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:43.239388  644254 retry.go:31] will retry after 206.182886ms: waiting for domain to come up
	I0210 14:00:43.446966  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:43.447547  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:43.447574  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:43.447518  644254 retry.go:31] will retry after 329.362933ms: waiting for domain to come up
	I0210 14:00:43.777967  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:43.778519  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:43.778554  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:43.778477  644254 retry.go:31] will retry after 346.453199ms: waiting for domain to come up
	I0210 14:00:44.127152  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:44.127724  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:44.127781  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:44.127714  644254 retry.go:31] will retry after 369.587225ms: waiting for domain to come up
	I0210 14:00:44.499259  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:44.499894  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:44.499927  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:44.499829  644254 retry.go:31] will retry after 551.579789ms: waiting for domain to come up
	I0210 14:00:45.052851  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:45.053389  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:45.053422  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:45.053344  644254 retry.go:31] will retry after 842.776955ms: waiting for domain to come up
	I0210 14:00:45.897296  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:45.897745  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:45.897769  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:45.897724  644254 retry.go:31] will retry after 1.081690621s: waiting for domain to come up
	I0210 14:00:45.618107  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:47.619453  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:49.620701  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:46.981229  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:48.981715  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:46.980845  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:46.981454  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:46.981483  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:46.981421  644254 retry.go:31] will retry after 1.310681169s: waiting for domain to come up
	I0210 14:00:48.293826  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:48.294265  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:48.294298  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:48.294220  644254 retry.go:31] will retry after 1.237090549s: waiting for domain to come up
	I0210 14:00:49.533469  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:49.534006  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:49.534094  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:49.533968  644254 retry.go:31] will retry after 1.844597316s: waiting for domain to come up
	I0210 14:00:51.379889  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:51.380473  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:51.380503  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:51.380434  644254 retry.go:31] will retry after 2.170543895s: waiting for domain to come up
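
	The repeated "will retry after ...: waiting for domain to come up" lines show the poll loop that waits for libvirt to hand the restarted VM an address: each attempt sleeps for a jittered, roughly growing delay before asking the network for the lease again. A minimal sketch of that retry shape, illustrative only and not minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries fn with a jittered, roughly doubling delay until it succeeds
	// or the deadline passes. Illustrative of the pattern only.
	func waitFor(fn func() error, initial, maxDelay time.Duration, deadline time.Time) error {
		delay := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out; last error: %w", err)
			}
			jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			if delay < maxDelay {
				delay *= 2
			}
		}
	}

	func main() {
		attempts := 0
		err := waitFor(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("waiting for domain to come up")
			}
			return nil
		}, 200*time.Millisecond, 4*time.Second, time.Now().Add(2*time.Minute))
		fmt.Println("done:", err)
	}
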
	I0210 14:00:52.119755  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:54.617729  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:51.482053  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:53.983530  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:53.553350  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:53.553858  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:53.553887  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:53.553814  644254 retry.go:31] will retry after 3.463243718s: waiting for domain to come up
	I0210 14:00:56.618165  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:58.618955  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:56.481501  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:58.980809  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:00:57.018476  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:57.018995  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:57.019016  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:57.018938  644254 retry.go:31] will retry after 2.849149701s: waiting for domain to come up
	I0210 14:00:59.871921  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.872407  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has current primary IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.872442  644218 main.go:141] libmachine: (old-k8s-version-643105) found domain IP: 192.168.72.78
	I0210 14:00:59.872459  644218 main.go:141] libmachine: (old-k8s-version-643105) reserving static IP address...
	I0210 14:00:59.872874  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "old-k8s-version-643105", mac: "52:54:00:de:ed:f5", ip: "192.168.72.78"} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:00:59.872912  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | skip adding static IP to network mk-old-k8s-version-643105 - found existing host DHCP lease matching {name: "old-k8s-version-643105", mac: "52:54:00:de:ed:f5", ip: "192.168.72.78"}
	I0210 14:00:59.872926  644218 main.go:141] libmachine: (old-k8s-version-643105) reserved static IP address 192.168.72.78 for domain old-k8s-version-643105
	I0210 14:00:59.872949  644218 main.go:141] libmachine: (old-k8s-version-643105) waiting for SSH...
	I0210 14:00:59.872967  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | Getting to WaitForSSH function...
	I0210 14:00:59.874962  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.875311  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:00:59.875344  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.875469  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH client type: external
	I0210 14:00:59.875491  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa (-rw-------)
	I0210 14:00:59.875537  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 14:00:59.875555  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | About to run SSH command:
	I0210 14:00:59.875568  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | exit 0
	I0210 14:00:59.996273  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | SSH cmd err, output: <nil>: 
	I0210 14:00:59.996664  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetConfigRaw
	I0210 14:00:59.997452  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:00:59.999899  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.000417  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.000441  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.000725  644218 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/config.json ...
	I0210 14:01:00.000950  644218 machine.go:93] provisionDockerMachine start ...
	I0210 14:01:00.000973  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:00.001218  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.003616  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.003975  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.004009  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.004135  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.004346  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.004533  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.004647  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.004837  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.005071  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.005083  644218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 14:01:00.104866  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 14:01:00.104903  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 14:01:00.105187  644218 buildroot.go:166] provisioning hostname "old-k8s-version-643105"
	I0210 14:01:00.105215  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 14:01:00.105403  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.108197  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.108678  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.108707  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.108836  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.109038  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.109213  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.109374  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.109547  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.109792  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.109807  644218 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-643105 && echo "old-k8s-version-643105" | sudo tee /etc/hostname
	I0210 14:01:00.227428  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-643105
	
	I0210 14:01:00.227461  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.230205  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.230529  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.230560  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.230756  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.230987  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.231161  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.231272  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.231422  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.231655  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.231680  644218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-643105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-643105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-643105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 14:01:00.346932  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 14:01:00.346964  644218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 14:01:00.347020  644218 buildroot.go:174] setting up certificates
	I0210 14:01:00.347031  644218 provision.go:84] configureAuth start
	I0210 14:01:00.347041  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 14:01:00.347306  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:01:00.350130  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.350530  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.350567  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.350764  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.353240  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.353564  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.353610  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.353714  644218 provision.go:143] copyHostCerts
	I0210 14:01:00.353795  644218 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 14:01:00.353810  644218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 14:01:00.353892  644218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 14:01:00.354042  644218 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 14:01:00.354055  644218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 14:01:00.354100  644218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 14:01:00.354190  644218 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 14:01:00.354200  644218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 14:01:00.354235  644218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 14:01:00.354321  644218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-643105 san=[127.0.0.1 192.168.72.78 localhost minikube old-k8s-version-643105]
	I0210 14:01:00.582524  644218 provision.go:177] copyRemoteCerts
	I0210 14:01:00.582605  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 14:01:00.582641  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.585672  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.586128  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.586164  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.586335  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.586557  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.586701  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.586806  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:00.667733  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 14:01:00.694010  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 14:01:00.719848  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 14:01:00.745526  644218 provision.go:87] duration metric: took 398.480071ms to configureAuth
	I0210 14:01:00.745561  644218 buildroot.go:189] setting minikube options for container-runtime
	I0210 14:01:00.745788  644218 config.go:182] Loaded profile config "old-k8s-version-643105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 14:01:00.745891  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.748846  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.749225  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.749256  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.749467  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.749682  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.749863  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.749997  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.750138  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.750322  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.750341  644218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 14:01:00.990441  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 14:01:00.990484  644218 machine.go:96] duration metric: took 989.502089ms to provisionDockerMachine
	I0210 14:01:00.990496  644218 start.go:293] postStartSetup for "old-k8s-version-643105" (driver="kvm2")
	I0210 14:01:00.990509  644218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 14:01:00.990526  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:00.990830  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 14:01:00.990865  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.993504  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.993870  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.993909  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.994111  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.994281  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.994462  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.994624  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:01.076590  644218 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 14:01:01.081371  644218 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 14:01:01.081401  644218 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 14:01:01.081474  644218 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 14:01:01.081597  644218 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 14:01:01.081759  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 14:01:01.091951  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:01:01.117344  644218 start.go:296] duration metric: took 126.828836ms for postStartSetup
	I0210 14:01:01.117395  644218 fix.go:56] duration metric: took 19.165814332s for fixHost
	I0210 14:01:01.117426  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:01.120411  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.120784  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.120826  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.120963  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:01.121266  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.121451  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.121603  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:01.121806  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:01.121987  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:01.122000  644218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 14:01:01.225245  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739196061.196371401
	
	I0210 14:01:01.225274  644218 fix.go:216] guest clock: 1739196061.196371401
	I0210 14:01:01.225284  644218 fix.go:229] Guest: 2025-02-10 14:01:01.196371401 +0000 UTC Remote: 2025-02-10 14:01:01.117401189 +0000 UTC m=+19.314698018 (delta=78.970212ms)
	I0210 14:01:01.225307  644218 fix.go:200] guest clock delta is within tolerance: 78.970212ms
	I0210 14:01:01.225312  644218 start.go:83] releasing machines lock for "old-k8s-version-643105", held for 19.273758703s
	I0210 14:01:01.225331  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.225635  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:01:01.228728  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.229154  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.229184  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.229307  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.229831  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.230027  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.230136  644218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 14:01:01.230183  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:01.230279  644218 ssh_runner.go:195] Run: cat /version.json
	I0210 14:01:01.230308  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:01.232882  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233201  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.233244  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233265  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233380  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:01.233549  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.233732  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.233765  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:01.233760  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233914  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:01.233972  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:01.234062  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.234210  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:01.234379  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:01.309536  644218 ssh_runner.go:195] Run: systemctl --version
	I0210 14:01:01.334102  644218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 14:01:01.486141  644218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 14:01:01.492934  644218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 14:01:01.493017  644218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 14:01:01.512726  644218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 14:01:01.512760  644218 start.go:495] detecting cgroup driver to use...
	I0210 14:01:01.512824  644218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 14:01:01.530256  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 14:01:01.545115  644218 docker.go:217] disabling cri-docker service (if available) ...
	I0210 14:01:01.545186  644218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 14:01:01.563057  644218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 14:01:01.578117  644218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 14:01:01.694843  644218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 14:01:01.827391  644218 docker.go:233] disabling docker service ...
	I0210 14:01:01.827476  644218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 14:01:01.843342  644218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 14:01:01.857886  644218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 14:01:01.992715  644218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 14:01:02.114653  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 14:01:02.129432  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 14:01:02.149788  644218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 14:01:02.149895  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.161677  644218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 14:01:02.161759  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.172851  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.183669  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.194818  644218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 14:01:02.205759  644218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 14:01:02.215660  644218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 14:01:02.215706  644218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 14:01:02.230109  644218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 14:01:02.240154  644218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:01:02.371171  644218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 14:01:02.470149  644218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 14:01:02.470240  644218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 14:01:02.475602  644218 start.go:563] Will wait 60s for crictl version
	I0210 14:01:02.475664  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:02.480049  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 14:01:02.520068  644218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 14:01:02.520185  644218 ssh_runner.go:195] Run: crio --version
	I0210 14:01:02.551045  644218 ssh_runner.go:195] Run: crio --version
	I0210 14:01:02.580931  644218 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 14:01:01.120193  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:03.619745  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:00.983117  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:03.485429  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:02.582157  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:01:02.584852  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:02.585284  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:02.585304  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:02.585561  644218 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0210 14:01:02.590450  644218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:01:02.604324  644218 kubeadm.go:883] updating cluster {Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 14:01:02.604467  644218 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 14:01:02.604516  644218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:01:02.652623  644218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 14:01:02.652686  644218 ssh_runner.go:195] Run: which lz4
	I0210 14:01:02.656943  644218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 14:01:02.661500  644218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 14:01:02.661534  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 14:01:04.339580  644218 crio.go:462] duration metric: took 1.682671792s to copy over tarball
	I0210 14:01:04.339684  644218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 14:01:06.118621  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:08.119039  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:05.982273  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:08.481105  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:07.350309  644218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.010577091s)
	I0210 14:01:07.350351  644218 crio.go:469] duration metric: took 3.010729902s to extract the tarball
	I0210 14:01:07.350361  644218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 14:01:07.395580  644218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:01:07.429452  644218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 14:01:07.429482  644218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 14:01:07.429570  644218 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:01:07.429600  644218 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.429606  644218 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.429571  644218 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.429634  644218 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.429647  644218 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.429597  644218 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.429724  644218 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 14:01:07.431438  644218 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.431487  644218 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.431493  644218 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.431504  644218 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 14:01:07.431438  644218 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:01:07.431443  644218 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.431511  644218 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.431613  644218 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.615291  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 14:01:07.623050  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.638086  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.652614  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.659368  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.667259  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.674953  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.742829  644218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 14:01:07.742919  644218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.742979  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.743280  644218 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 14:01:07.743320  644218 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 14:01:07.743365  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.783732  644218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 14:01:07.783792  644218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.783839  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.825251  644218 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 14:01:07.825316  644218 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.825371  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.831958  644218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 14:01:07.832006  644218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.832057  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.832062  644218 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 14:01:07.832097  644218 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.832099  644218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 14:01:07.832131  644218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.832142  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.832161  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.832167  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.832168  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 14:01:07.832201  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.832291  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.836691  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.942019  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.947733  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.947838  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.955245  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.955328  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.955351  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 14:01:07.960018  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:08.070966  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:08.126839  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 14:01:08.126913  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:08.127415  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 14:01:08.131979  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:08.132020  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:08.132080  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:08.209596  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 14:01:08.267603  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 14:01:08.269564  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 14:01:08.275411  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 14:01:08.282282  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 14:01:08.294152  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 14:01:08.294240  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:08.325700  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 14:01:08.345419  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 14:01:08.523550  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:01:08.667959  644218 cache_images.go:92] duration metric: took 1.238457309s to LoadCachedImages
	W0210 14:01:08.668089  644218 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0210 14:01:08.668109  644218 kubeadm.go:934] updating node { 192.168.72.78 8443 v1.20.0 crio true true} ...
	I0210 14:01:08.668302  644218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-643105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 14:01:08.668409  644218 ssh_runner.go:195] Run: crio config
	I0210 14:01:08.722011  644218 cni.go:84] Creating CNI manager for ""
	I0210 14:01:08.722036  644218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:01:08.722084  644218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 14:01:08.722108  644218 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.78 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-643105 NodeName:old-k8s-version-643105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 14:01:08.722252  644218 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-643105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 14:01:08.722318  644218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 14:01:08.733118  644218 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 14:01:08.733210  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 14:01:08.743915  644218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0210 14:01:08.763793  644218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 14:01:08.783491  644218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0210 14:01:08.803659  644218 ssh_runner.go:195] Run: grep 192.168.72.78	control-plane.minikube.internal$ /etc/hosts
	I0210 14:01:08.808218  644218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:01:08.822404  644218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:01:08.942076  644218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 14:01:08.960541  644218 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105 for IP: 192.168.72.78
	I0210 14:01:08.960571  644218 certs.go:194] generating shared ca certs ...
	I0210 14:01:08.960594  644218 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:01:08.960813  644218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 14:01:08.960874  644218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 14:01:08.960887  644218 certs.go:256] generating profile certs ...
	I0210 14:01:08.961019  644218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.key
	I0210 14:01:08.961097  644218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key.2b43ede7
	I0210 14:01:08.961152  644218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key
	I0210 14:01:08.961318  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 14:01:08.961360  644218 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 14:01:08.961375  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 14:01:08.961405  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 14:01:08.961438  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 14:01:08.961471  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 14:01:08.961526  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:01:08.962236  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 14:01:09.002999  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 14:01:09.042607  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 14:01:09.078020  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 14:01:09.105717  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 14:01:09.132990  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 14:01:09.159931  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 14:01:09.188143  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 14:01:09.227520  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 14:01:09.257228  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 14:01:09.282623  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 14:01:09.306810  644218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 14:01:09.325730  644218 ssh_runner.go:195] Run: openssl version
	I0210 14:01:09.332234  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 14:01:09.346330  644218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:01:09.351353  644218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:01:09.351419  644218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:01:09.358262  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 14:01:09.370517  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 14:01:09.382204  644218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 14:01:09.386897  644218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 14:01:09.386964  644218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 14:01:09.392847  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 14:01:09.404611  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 14:01:09.416794  644218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 14:01:09.421929  644218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 14:01:09.422001  644218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 14:01:09.428502  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 14:01:09.440486  644218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 14:01:09.445440  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 14:01:09.451749  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 14:01:09.458986  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 14:01:09.465394  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 14:01:09.472248  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 14:01:09.479629  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 14:01:09.486700  644218 kubeadm.go:392] StartCluster: {Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:01:09.486817  644218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 14:01:09.486888  644218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:01:09.527393  644218 cri.go:89] found id: ""
	I0210 14:01:09.527468  644218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 14:01:09.538292  644218 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 14:01:09.538316  644218 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 14:01:09.538361  644218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 14:01:09.548788  644218 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 14:01:09.549897  644218 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-643105" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:01:09.550478  644218 kubeconfig.go:62] /home/jenkins/minikube-integration/20390-580861/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-643105" cluster setting kubeconfig missing "old-k8s-version-643105" context setting]
	I0210 14:01:09.551355  644218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:01:09.595572  644218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 14:01:09.608048  644218 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.78
	I0210 14:01:09.608087  644218 kubeadm.go:1160] stopping kube-system containers ...
	I0210 14:01:09.608107  644218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 14:01:09.608167  644218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:01:09.652676  644218 cri.go:89] found id: ""
	I0210 14:01:09.652766  644218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 14:01:09.670953  644218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:01:09.683380  644218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:01:09.683403  644218 kubeadm.go:157] found existing configuration files:
	
	I0210 14:01:09.683452  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 14:01:09.694551  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:01:09.694611  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:01:09.705237  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 14:01:09.715066  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:01:09.715145  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:01:09.726566  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 14:01:09.737269  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:01:09.737352  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:01:09.748364  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 14:01:09.760127  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:01:09.760192  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:01:09.772077  644218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:01:09.782590  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:09.933455  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:10.817736  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:11.047055  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:11.146436  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:11.243309  644218 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:01:11.243404  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:11.744192  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:10.617662  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:13.118565  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:10.481729  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:12.981304  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:12.244363  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:12.743801  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:13.243553  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:13.744474  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:14.243523  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:14.744173  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:15.243867  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:15.743694  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:16.244417  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:16.743628  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:15.617730  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:17.617959  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:15.480892  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:17.481373  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:19.981022  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:17.244040  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:17.744421  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:18.244035  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:18.744414  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:19.244475  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:19.743804  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:20.244513  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:20.743606  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:21.244269  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:21.744442  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:20.118879  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:22.618399  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:24.618749  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:21.981129  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:23.981802  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:22.244379  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:22.743484  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:23.243994  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:23.744178  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:24.244394  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:24.744175  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:25.244420  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:25.744476  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:26.243537  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:26.744334  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:27.117396  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:29.118226  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:25.982189  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:28.480587  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:27.244400  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:27.743573  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:28.244521  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:28.743721  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:29.244304  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:29.744265  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:30.243673  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:30.744121  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:31.243493  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:31.744306  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:31.618579  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:34.117406  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:30.981084  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:32.981233  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:34.981412  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:32.244304  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:32.743525  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:33.244550  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:33.743639  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:34.244395  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:34.744112  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:35.244321  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:35.743570  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:36.244179  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:36.744400  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:36.117802  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:38.118196  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:37.481105  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:39.481340  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:37.244130  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:37.743892  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:38.243746  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:38.743772  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:39.244330  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:39.743916  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:40.243566  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:40.743846  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:41.243608  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:41.743950  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:40.118499  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:42.618216  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:41.981140  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:43.981343  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:42.244397  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:42.744118  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:43.244417  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:43.744172  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:44.243711  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:44.743862  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:45.243727  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:45.743873  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:46.244115  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:46.743788  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:45.118117  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:47.118944  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:49.617531  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:46.479917  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:48.482485  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:47.244429  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:47.743614  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:48.244349  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:48.743552  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:49.243815  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:49.744369  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:50.243839  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:50.743533  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:51.244507  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:51.744137  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:51.619150  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:54.117237  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:50.981743  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:53.481393  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:52.244106  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:52.744366  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:53.244035  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:53.744155  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:54.243661  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:54.744106  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:55.244495  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:55.744433  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:56.244154  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:56.744508  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:56.118795  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:58.617498  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:55.481567  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:57.982124  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:59.982454  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:01:57.244475  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:57.743886  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:58.243572  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:58.744414  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:59.244367  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:59.743561  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:00.243790  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:00.743903  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:01.243740  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:01.744269  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:00.618171  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:03.117773  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:02.480944  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:04.982061  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:02.244119  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:02.743871  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:03.243921  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:03.744410  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:04.243622  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:04.744443  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:05.244122  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:05.744007  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:06.244161  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:06.743692  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:05.118383  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:07.617630  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:09.618582  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:07.480897  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:09.481803  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:07.244335  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:07.743959  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:08.243492  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:08.743587  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:09.244176  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:09.744483  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:10.243822  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:10.744008  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:11.244385  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:11.244471  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:11.286364  644218 cri.go:89] found id: ""
	I0210 14:02:11.286393  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.286405  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:11.286417  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:11.286475  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:11.329994  644218 cri.go:89] found id: ""
	I0210 14:02:11.330022  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.330051  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:11.330059  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:11.330138  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:11.367663  644218 cri.go:89] found id: ""
	I0210 14:02:11.367695  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.367705  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:11.367712  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:11.367768  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:11.403264  644218 cri.go:89] found id: ""
	I0210 14:02:11.403304  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.403316  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:11.403325  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:11.403394  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:11.440492  644218 cri.go:89] found id: ""
	I0210 14:02:11.440526  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.440538  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:11.440547  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:11.440613  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:11.476373  644218 cri.go:89] found id: ""
	I0210 14:02:11.476405  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.476415  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:11.476423  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:11.476488  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:11.514207  644218 cri.go:89] found id: ""
	I0210 14:02:11.514240  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.514248  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:11.514255  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:11.514306  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:11.549693  644218 cri.go:89] found id: ""
	I0210 14:02:11.549728  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.549739  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:11.549759  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:11.549776  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:11.562981  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:11.563007  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:11.693788  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:11.693815  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:11.693828  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:11.764272  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:11.764318  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:11.806070  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:11.806099  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:12.118465  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:14.620913  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:11.482090  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:13.482425  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:14.358810  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:14.372745  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:14.372832  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:14.409693  644218 cri.go:89] found id: ""
	I0210 14:02:14.409725  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.409736  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:14.409746  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:14.409824  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:14.453067  644218 cri.go:89] found id: ""
	I0210 14:02:14.453102  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.453111  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:14.453118  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:14.453203  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:14.492519  644218 cri.go:89] found id: ""
	I0210 14:02:14.492546  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.492554  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:14.492560  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:14.492640  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:14.529288  644218 cri.go:89] found id: ""
	I0210 14:02:14.529322  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.529332  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:14.529340  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:14.529408  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:14.575092  644218 cri.go:89] found id: ""
	I0210 14:02:14.575123  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.575132  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:14.575138  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:14.575211  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:14.621654  644218 cri.go:89] found id: ""
	I0210 14:02:14.621679  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.621690  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:14.621699  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:14.621761  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:14.664478  644218 cri.go:89] found id: ""
	I0210 14:02:14.664506  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.664513  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:14.664519  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:14.664572  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:14.710019  644218 cri.go:89] found id: ""
	I0210 14:02:14.710054  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.710063  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:14.710073  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:14.710087  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:14.762929  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:14.762970  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:14.776939  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:14.776968  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:14.848342  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:14.848365  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:14.848381  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:14.922486  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:14.922535  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:17.117881  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:19.617311  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:15.981791  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:17.982417  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:17.466274  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:17.480332  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:17.480412  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:17.518257  644218 cri.go:89] found id: ""
	I0210 14:02:17.518290  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.518302  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:17.518311  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:17.518372  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:17.553779  644218 cri.go:89] found id: ""
	I0210 14:02:17.553806  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.553814  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:17.553826  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:17.553882  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:17.595478  644218 cri.go:89] found id: ""
	I0210 14:02:17.595529  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.595538  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:17.595545  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:17.595615  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:17.632549  644218 cri.go:89] found id: ""
	I0210 14:02:17.632574  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.632582  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:17.632588  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:17.632650  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:17.667748  644218 cri.go:89] found id: ""
	I0210 14:02:17.667779  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.667788  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:17.667794  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:17.667867  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:17.702855  644218 cri.go:89] found id: ""
	I0210 14:02:17.702891  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.702903  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:17.702911  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:17.702980  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:17.735604  644218 cri.go:89] found id: ""
	I0210 14:02:17.735635  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.735644  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:17.735651  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:17.735718  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:17.770407  644218 cri.go:89] found id: ""
	I0210 14:02:17.770441  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.770465  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:17.770479  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:17.770505  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:17.850219  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:17.850247  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:17.850266  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:17.930615  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:17.930665  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:17.976840  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:17.976878  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:18.030287  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:18.030334  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:20.547098  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:20.568343  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:20.568418  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:20.617065  644218 cri.go:89] found id: ""
	I0210 14:02:20.617117  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.617129  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:20.617142  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:20.617216  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:20.666206  644218 cri.go:89] found id: ""
	I0210 14:02:20.666242  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.666254  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:20.666261  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:20.666342  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:20.702778  644218 cri.go:89] found id: ""
	I0210 14:02:20.702813  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.702826  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:20.702834  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:20.702894  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:20.738798  644218 cri.go:89] found id: ""
	I0210 14:02:20.738825  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.738835  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:20.738844  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:20.738916  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:20.779218  644218 cri.go:89] found id: ""
	I0210 14:02:20.779251  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.779270  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:20.779279  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:20.779347  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:20.817485  644218 cri.go:89] found id: ""
	I0210 14:02:20.817519  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.817535  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:20.817546  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:20.817620  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:20.853588  644218 cri.go:89] found id: ""
	I0210 14:02:20.853622  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.853672  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:20.853679  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:20.853738  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:20.889051  644218 cri.go:89] found id: ""
	I0210 14:02:20.889088  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.889120  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:20.889134  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:20.889148  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:20.940039  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:20.940084  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:20.954579  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:20.954608  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:21.024304  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:21.024332  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:21.024346  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:21.101726  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:21.101774  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:21.619501  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:24.117270  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:20.481629  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:22.981271  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:24.981983  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:23.647432  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:23.660624  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:23.660713  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:23.701064  644218 cri.go:89] found id: ""
	I0210 14:02:23.701094  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.701102  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:23.701108  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:23.701162  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:23.735230  644218 cri.go:89] found id: ""
	I0210 14:02:23.735258  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.735266  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:23.735272  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:23.735328  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:23.770242  644218 cri.go:89] found id: ""
	I0210 14:02:23.770273  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.770282  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:23.770291  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:23.770361  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:23.807768  644218 cri.go:89] found id: ""
	I0210 14:02:23.807802  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.807815  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:23.807823  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:23.807896  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:23.844969  644218 cri.go:89] found id: ""
	I0210 14:02:23.845006  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.845018  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:23.845032  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:23.845105  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:23.880080  644218 cri.go:89] found id: ""
	I0210 14:02:23.880119  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.880131  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:23.880138  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:23.880217  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:23.926799  644218 cri.go:89] found id: ""
	I0210 14:02:23.926835  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.926843  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:23.926850  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:23.926907  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:23.967286  644218 cri.go:89] found id: ""
	I0210 14:02:23.967320  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.967332  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:23.967347  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:23.967364  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:24.045745  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:24.045798  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:24.089243  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:24.089276  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:24.138300  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:24.138342  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:24.154534  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:24.154582  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:24.227255  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:26.728927  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:26.743363  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:26.743447  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:26.779325  644218 cri.go:89] found id: ""
	I0210 14:02:26.779362  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.779375  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:26.779383  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:26.779450  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:26.816861  644218 cri.go:89] found id: ""
	I0210 14:02:26.816894  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.816906  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:26.816952  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:26.817029  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:26.118558  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:28.618320  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:27.480964  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:29.981948  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:26.860520  644218 cri.go:89] found id: ""
	I0210 14:02:26.860552  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.860561  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:26.860568  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:26.860637  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:26.898009  644218 cri.go:89] found id: ""
	I0210 14:02:26.898044  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.898055  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:26.898064  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:26.898136  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:26.931901  644218 cri.go:89] found id: ""
	I0210 14:02:26.931939  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.931958  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:26.931968  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:26.932045  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:26.975597  644218 cri.go:89] found id: ""
	I0210 14:02:26.975625  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.975633  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:26.975640  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:26.975695  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:27.012995  644218 cri.go:89] found id: ""
	I0210 14:02:27.013029  644218 logs.go:282] 0 containers: []
	W0210 14:02:27.013040  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:27.013048  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:27.013116  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:27.050318  644218 cri.go:89] found id: ""
	I0210 14:02:27.050346  644218 logs.go:282] 0 containers: []
	W0210 14:02:27.050354  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:27.050364  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:27.050377  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:27.102947  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:27.102983  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:27.117768  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:27.117815  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:27.186683  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:27.186707  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:27.186721  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:27.267129  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:27.267166  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:29.811859  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:29.825046  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:29.825142  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:29.861264  644218 cri.go:89] found id: ""
	I0210 14:02:29.861303  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.861316  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:29.861324  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:29.861397  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:29.900434  644218 cri.go:89] found id: ""
	I0210 14:02:29.900464  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.900472  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:29.900479  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:29.900542  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:29.937412  644218 cri.go:89] found id: ""
	I0210 14:02:29.937442  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.937454  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:29.937461  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:29.937545  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:29.978051  644218 cri.go:89] found id: ""
	I0210 14:02:29.978082  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.978092  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:29.978099  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:29.978166  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:30.017678  644218 cri.go:89] found id: ""
	I0210 14:02:30.017766  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.017782  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:30.017791  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:30.017860  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:30.059305  644218 cri.go:89] found id: ""
	I0210 14:02:30.059336  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.059346  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:30.059355  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:30.059425  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:30.096690  644218 cri.go:89] found id: ""
	I0210 14:02:30.096736  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.096748  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:30.096757  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:30.096829  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:30.132812  644218 cri.go:89] found id: ""
	I0210 14:02:30.132846  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.132855  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:30.132866  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:30.132883  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:30.186166  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:30.186208  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:30.202789  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:30.202827  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:30.278004  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:30.278031  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:30.278049  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:30.366990  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:30.367030  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:30.618700  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:33.119204  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:31.982119  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:34.481206  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:32.908509  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:32.921779  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:32.921856  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:32.962265  644218 cri.go:89] found id: ""
	I0210 14:02:32.962300  644218 logs.go:282] 0 containers: []
	W0210 14:02:32.962311  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:32.962319  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:32.962388  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:32.996492  644218 cri.go:89] found id: ""
	I0210 14:02:32.996524  644218 logs.go:282] 0 containers: []
	W0210 14:02:32.996537  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:32.996544  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:32.996611  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:33.033211  644218 cri.go:89] found id: ""
	I0210 14:02:33.033251  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.033265  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:33.033274  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:33.033345  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:33.067479  644218 cri.go:89] found id: ""
	I0210 14:02:33.067517  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.067528  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:33.067537  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:33.067631  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:33.105719  644218 cri.go:89] found id: ""
	I0210 14:02:33.105750  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.105761  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:33.105768  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:33.105836  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:33.145033  644218 cri.go:89] found id: ""
	I0210 14:02:33.145060  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.145067  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:33.145084  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:33.145135  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:33.180968  644218 cri.go:89] found id: ""
	I0210 14:02:33.180994  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.181003  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:33.181013  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:33.181071  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:33.216463  644218 cri.go:89] found id: ""
	I0210 14:02:33.216488  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.216497  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:33.216507  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:33.216527  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:33.229839  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:33.229873  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:33.302667  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:33.302694  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:33.302712  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:33.380724  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:33.380767  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:33.422940  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:33.422974  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:35.980433  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:35.993639  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:35.993721  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:36.031302  644218 cri.go:89] found id: ""
	I0210 14:02:36.031338  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.031351  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:36.031360  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:36.031418  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:36.064362  644218 cri.go:89] found id: ""
	I0210 14:02:36.064396  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.064408  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:36.064417  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:36.064474  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:36.099393  644218 cri.go:89] found id: ""
	I0210 14:02:36.099422  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.099431  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:36.099438  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:36.099506  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:36.135921  644218 cri.go:89] found id: ""
	I0210 14:02:36.135952  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.135963  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:36.135972  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:36.136024  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:36.178044  644218 cri.go:89] found id: ""
	I0210 14:02:36.178073  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.178083  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:36.178091  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:36.178151  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:36.213320  644218 cri.go:89] found id: ""
	I0210 14:02:36.213350  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.213362  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:36.213369  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:36.213442  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:36.251431  644218 cri.go:89] found id: ""
	I0210 14:02:36.251457  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.251465  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:36.251474  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:36.251543  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:36.286389  644218 cri.go:89] found id: ""
	I0210 14:02:36.286421  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.286432  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:36.286446  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:36.286463  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:36.300293  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:36.300323  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:36.373240  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:36.373265  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:36.373283  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:36.455529  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:36.455574  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:36.497953  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:36.497994  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:35.617744  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:38.117255  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:36.483008  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:38.981061  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:39.051048  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:39.063906  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:39.064003  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:39.097633  644218 cri.go:89] found id: ""
	I0210 14:02:39.097669  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.097681  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:39.097690  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:39.097759  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:39.133312  644218 cri.go:89] found id: ""
	I0210 14:02:39.133341  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.133353  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:39.133360  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:39.133425  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:39.170137  644218 cri.go:89] found id: ""
	I0210 14:02:39.170169  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.170180  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:39.170188  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:39.170257  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:39.204690  644218 cri.go:89] found id: ""
	I0210 14:02:39.204722  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.204731  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:39.204738  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:39.204792  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:39.241064  644218 cri.go:89] found id: ""
	I0210 14:02:39.241094  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.241102  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:39.241119  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:39.241178  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:39.279602  644218 cri.go:89] found id: ""
	I0210 14:02:39.279630  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.279638  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:39.279644  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:39.279697  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:39.328061  644218 cri.go:89] found id: ""
	I0210 14:02:39.328089  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.328097  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:39.328105  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:39.328177  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:39.365418  644218 cri.go:89] found id: ""
	I0210 14:02:39.365447  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.365456  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:39.365467  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:39.365478  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:39.418099  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:39.418135  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:39.432723  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:39.432763  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:39.502112  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:39.502144  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:39.502177  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:39.579038  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:39.579088  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:40.117354  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:42.118084  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:44.118769  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:40.981589  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:43.482467  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:42.122820  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:42.135832  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:42.135904  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:42.170673  644218 cri.go:89] found id: ""
	I0210 14:02:42.170713  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.170726  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:42.170735  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:42.170809  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:42.204257  644218 cri.go:89] found id: ""
	I0210 14:02:42.204303  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.204312  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:42.204319  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:42.204383  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:42.238954  644218 cri.go:89] found id: ""
	I0210 14:02:42.238987  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.238999  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:42.239007  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:42.239079  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:42.273753  644218 cri.go:89] found id: ""
	I0210 14:02:42.273784  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.273793  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:42.273800  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:42.273852  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:42.305964  644218 cri.go:89] found id: ""
	I0210 14:02:42.305989  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.305997  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:42.306003  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:42.306055  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:42.340601  644218 cri.go:89] found id: ""
	I0210 14:02:42.340635  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.340645  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:42.340654  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:42.340723  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:42.378707  644218 cri.go:89] found id: ""
	I0210 14:02:42.378743  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.378755  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:42.378765  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:42.378836  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:42.418150  644218 cri.go:89] found id: ""
	I0210 14:02:42.418187  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.418199  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:42.418214  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:42.418238  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:42.432129  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:42.432171  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:42.501810  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:42.501841  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:42.501862  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:42.576752  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:42.576797  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:42.616411  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:42.616441  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:45.171596  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:45.184429  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:45.184514  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:45.219366  644218 cri.go:89] found id: ""
	I0210 14:02:45.219398  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.219410  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:45.219419  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:45.219488  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:45.255638  644218 cri.go:89] found id: ""
	I0210 14:02:45.255670  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.255679  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:45.255685  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:45.255739  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:45.290092  644218 cri.go:89] found id: ""
	I0210 14:02:45.290126  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.290135  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:45.290141  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:45.290207  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:45.327283  644218 cri.go:89] found id: ""
	I0210 14:02:45.327311  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.327320  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:45.327326  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:45.327393  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:45.362888  644218 cri.go:89] found id: ""
	I0210 14:02:45.362929  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.362940  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:45.362949  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:45.363019  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:45.398844  644218 cri.go:89] found id: ""
	I0210 14:02:45.398875  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.398884  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:45.398891  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:45.398947  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:45.434994  644218 cri.go:89] found id: ""
	I0210 14:02:45.435028  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.435040  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:45.435049  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:45.435124  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:45.471469  644218 cri.go:89] found id: ""
	I0210 14:02:45.471500  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.471511  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:45.471526  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:45.471544  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:45.555817  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:45.555860  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:45.597427  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:45.597458  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:45.651433  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:45.651471  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:45.665662  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:45.665691  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:45.733400  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:46.119366  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:48.618698  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:45.980915  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:47.982002  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:49.983484  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:48.233572  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:48.246787  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:48.246865  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:48.282005  644218 cri.go:89] found id: ""
	I0210 14:02:48.282031  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.282040  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:48.282046  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:48.282122  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:48.320510  644218 cri.go:89] found id: ""
	I0210 14:02:48.320542  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.320553  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:48.320569  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:48.320640  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:48.360959  644218 cri.go:89] found id: ""
	I0210 14:02:48.360988  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.360997  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:48.361004  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:48.361056  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:48.399784  644218 cri.go:89] found id: ""
	I0210 14:02:48.399814  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.399825  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:48.399832  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:48.399897  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:48.435401  644218 cri.go:89] found id: ""
	I0210 14:02:48.435433  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.435443  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:48.435451  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:48.435515  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:48.470377  644218 cri.go:89] found id: ""
	I0210 14:02:48.470410  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.470423  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:48.470431  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:48.470501  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:48.513766  644218 cri.go:89] found id: ""
	I0210 14:02:48.513803  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.513812  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:48.513818  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:48.513881  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:48.548542  644218 cri.go:89] found id: ""
	I0210 14:02:48.548574  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.548587  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:48.548599  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:48.548614  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:48.599918  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:48.599954  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:48.614533  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:48.614577  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:48.694464  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:48.694499  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:48.694518  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:48.775406  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:48.775469  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:51.327037  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:51.339986  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:51.340076  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:51.375772  644218 cri.go:89] found id: ""
	I0210 14:02:51.375801  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.375812  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:51.375821  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:51.375885  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:51.414590  644218 cri.go:89] found id: ""
	I0210 14:02:51.414617  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.414626  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:51.414636  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:51.414696  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:51.454903  644218 cri.go:89] found id: ""
	I0210 14:02:51.454934  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.454943  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:51.454952  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:51.455020  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:51.493095  644218 cri.go:89] found id: ""
	I0210 14:02:51.493119  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.493127  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:51.493133  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:51.493185  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:51.529308  644218 cri.go:89] found id: ""
	I0210 14:02:51.529337  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.529345  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:51.529351  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:51.529409  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:51.567667  644218 cri.go:89] found id: ""
	I0210 14:02:51.567692  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.567701  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:51.567708  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:51.567764  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:51.606199  644218 cri.go:89] found id: ""
	I0210 14:02:51.606240  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.606252  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:51.606259  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:51.606326  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:51.639401  644218 cri.go:89] found id: ""
	I0210 14:02:51.639438  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.639451  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:51.639466  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:51.639483  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:51.676250  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:51.676315  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:51.727512  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:51.727556  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:51.744257  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:51.744314  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:51.819189  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:51.819220  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:51.819239  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:51.117018  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:53.117816  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:52.481756  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:54.482040  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:54.397008  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:54.426335  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:54.426398  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:54.461194  644218 cri.go:89] found id: ""
	I0210 14:02:54.461230  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.461239  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:54.461245  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:54.461308  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:54.498546  644218 cri.go:89] found id: ""
	I0210 14:02:54.498574  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.498583  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:54.498591  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:54.498668  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:54.534427  644218 cri.go:89] found id: ""
	I0210 14:02:54.534459  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.534471  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:54.534480  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:54.534536  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:54.570856  644218 cri.go:89] found id: ""
	I0210 14:02:54.570888  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.570898  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:54.570907  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:54.570986  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:54.609274  644218 cri.go:89] found id: ""
	I0210 14:02:54.609316  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.609329  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:54.609339  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:54.609394  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:54.650978  644218 cri.go:89] found id: ""
	I0210 14:02:54.651012  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.651024  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:54.651032  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:54.651103  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:54.694455  644218 cri.go:89] found id: ""
	I0210 14:02:54.694486  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.694494  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:54.694500  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:54.694565  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:54.734916  644218 cri.go:89] found id: ""
	I0210 14:02:54.734944  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.734954  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:54.734969  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:54.734985  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:54.781320  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:54.781365  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:54.839551  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:54.839592  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:54.856166  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:54.856198  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:54.937073  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:54.937095  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:54.937108  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:55.118307  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:57.618357  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:56.482199  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:58.982151  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:02:57.515561  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:57.529013  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:57.529077  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:57.566030  644218 cri.go:89] found id: ""
	I0210 14:02:57.566072  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.566083  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:57.566092  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:57.566165  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:57.601983  644218 cri.go:89] found id: ""
	I0210 14:02:57.602020  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.602033  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:57.602047  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:57.602115  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:57.641798  644218 cri.go:89] found id: ""
	I0210 14:02:57.641830  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.641840  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:57.641848  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:57.641918  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:57.677360  644218 cri.go:89] found id: ""
	I0210 14:02:57.677392  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.677405  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:57.677414  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:57.677482  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:57.714634  644218 cri.go:89] found id: ""
	I0210 14:02:57.714667  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.714678  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:57.714685  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:57.714751  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:57.755338  644218 cri.go:89] found id: ""
	I0210 14:02:57.755371  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.755383  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:57.755392  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:57.755457  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:57.792621  644218 cri.go:89] found id: ""
	I0210 14:02:57.792658  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.792672  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:57.792690  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:57.792753  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:57.829844  644218 cri.go:89] found id: ""
	I0210 14:02:57.829879  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.829892  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:57.829907  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:57.829932  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:57.885425  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:57.885462  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:57.899815  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:57.899847  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:57.970164  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:57.970193  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:57.970208  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:58.050373  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:58.050415  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:00.595884  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:00.609913  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:00.610000  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:00.649124  644218 cri.go:89] found id: ""
	I0210 14:03:00.649158  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.649169  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:00.649178  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:00.649252  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:00.686014  644218 cri.go:89] found id: ""
	I0210 14:03:00.686048  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.686058  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:00.686066  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:00.686124  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:00.720878  644218 cri.go:89] found id: ""
	I0210 14:03:00.720908  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.720917  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:00.720924  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:00.720991  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:00.756490  644218 cri.go:89] found id: ""
	I0210 14:03:00.756515  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.756524  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:00.756530  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:00.756581  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:00.804539  644218 cri.go:89] found id: ""
	I0210 14:03:00.804572  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.804583  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:00.804590  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:00.804658  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:00.858778  644218 cri.go:89] found id: ""
	I0210 14:03:00.858811  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.858820  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:00.858828  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:00.858895  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:00.913535  644218 cri.go:89] found id: ""
	I0210 14:03:00.913564  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.913572  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:00.913578  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:00.913642  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:00.959513  644218 cri.go:89] found id: ""
	I0210 14:03:00.959545  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.959556  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:00.959569  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:00.959587  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:01.016776  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:01.016821  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:01.033429  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:01.033464  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:01.118266  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:01.118287  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:01.118303  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:01.205884  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:01.205937  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:00.119697  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:02.618954  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:01.482325  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:03.981215  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:03.753520  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:03.767719  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:03.767790  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:03.802499  644218 cri.go:89] found id: ""
	I0210 14:03:03.802531  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.802542  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:03.802552  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:03.802625  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:03.836771  644218 cri.go:89] found id: ""
	I0210 14:03:03.836808  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.836818  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:03.836824  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:03.836915  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:03.872213  644218 cri.go:89] found id: ""
	I0210 14:03:03.872241  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.872249  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:03.872256  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:03.872321  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:03.907698  644218 cri.go:89] found id: ""
	I0210 14:03:03.907739  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.907751  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:03.907759  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:03.907833  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:03.944625  644218 cri.go:89] found id: ""
	I0210 14:03:03.944655  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.944662  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:03.944668  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:03.944737  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:03.983758  644218 cri.go:89] found id: ""
	I0210 14:03:03.983784  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.983794  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:03.983803  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:03.983888  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:04.019244  644218 cri.go:89] found id: ""
	I0210 14:03:04.019272  644218 logs.go:282] 0 containers: []
	W0210 14:03:04.019280  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:04.019286  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:04.019347  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:04.055800  644218 cri.go:89] found id: ""
	I0210 14:03:04.055831  644218 logs.go:282] 0 containers: []
	W0210 14:03:04.055840  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:04.055850  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:04.055865  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:04.124940  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:04.124968  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:04.124981  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:04.198549  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:04.198589  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:04.242831  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:04.242864  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:04.294003  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:04.294040  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:06.810538  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:06.825419  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:06.825505  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:05.117467  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:07.118356  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:09.617793  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:05.982335  642990 pod_ready.go:103] pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:08.475548  642990 pod_ready.go:82] duration metric: took 4m0.00014172s for pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace to be "Ready" ...
	E0210 14:03:08.475592  642990 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-m682t" in "kube-system" namespace to be "Ready" (will not retry!)
	I0210 14:03:08.475622  642990 pod_ready.go:39] duration metric: took 4m12.045074628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:03:08.475657  642990 kubeadm.go:597] duration metric: took 4m19.976263387s to restartPrimaryControlPlane
	W0210 14:03:08.475752  642990 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 14:03:08.475809  642990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:03:06.860135  644218 cri.go:89] found id: ""
	I0210 14:03:06.860176  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.860186  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:06.860206  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:06.860262  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:06.896110  644218 cri.go:89] found id: ""
	I0210 14:03:06.896142  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.896151  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:06.896172  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:06.896227  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:06.931936  644218 cri.go:89] found id: ""
	I0210 14:03:06.931965  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.931975  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:06.931982  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:06.932039  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:06.968502  644218 cri.go:89] found id: ""
	I0210 14:03:06.968529  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.968537  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:06.968543  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:06.968609  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:07.004172  644218 cri.go:89] found id: ""
	I0210 14:03:07.004201  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.004210  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:07.004224  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:07.004308  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:07.037806  644218 cri.go:89] found id: ""
	I0210 14:03:07.037845  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.037857  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:07.037866  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:07.037920  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:07.072468  644218 cri.go:89] found id: ""
	I0210 14:03:07.072502  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.072516  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:07.072524  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:07.072593  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:07.109513  644218 cri.go:89] found id: ""
	I0210 14:03:07.109544  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.109554  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:07.109568  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:07.109585  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:07.162551  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:07.162589  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:07.176535  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:07.176563  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:07.246994  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:07.247029  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:07.247047  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:07.327563  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:07.327611  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:09.876047  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:09.889430  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:09.889512  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:09.922155  644218 cri.go:89] found id: ""
	I0210 14:03:09.922187  644218 logs.go:282] 0 containers: []
	W0210 14:03:09.922199  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:09.922208  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:09.922284  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:09.957894  644218 cri.go:89] found id: ""
	I0210 14:03:09.957929  644218 logs.go:282] 0 containers: []
	W0210 14:03:09.957941  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:09.957949  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:09.958014  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:09.992853  644218 cri.go:89] found id: ""
	I0210 14:03:09.992891  644218 logs.go:282] 0 containers: []
	W0210 14:03:09.992904  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:09.992919  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:09.992998  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:10.028929  644218 cri.go:89] found id: ""
	I0210 14:03:10.028962  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.028978  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:10.028987  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:10.029068  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:10.063936  644218 cri.go:89] found id: ""
	I0210 14:03:10.063982  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.063994  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:10.064003  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:10.064069  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:10.101754  644218 cri.go:89] found id: ""
	I0210 14:03:10.101786  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.101798  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:10.101806  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:10.101865  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:10.140910  644218 cri.go:89] found id: ""
	I0210 14:03:10.140937  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.140945  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:10.140951  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:10.141017  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:10.182602  644218 cri.go:89] found id: ""
	I0210 14:03:10.182629  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.182638  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:10.182651  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:10.182670  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:10.196740  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:10.196776  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:10.269899  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:10.269925  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:10.269952  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:10.349425  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:10.349469  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:10.394256  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:10.394298  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:12.118526  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:14.617198  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:12.948555  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:12.962549  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:12.962658  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:12.998072  644218 cri.go:89] found id: ""
	I0210 14:03:12.998109  644218 logs.go:282] 0 containers: []
	W0210 14:03:12.998122  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:12.998130  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:12.998199  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:13.032802  644218 cri.go:89] found id: ""
	I0210 14:03:13.032842  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.032853  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:13.032859  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:13.032917  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:13.069970  644218 cri.go:89] found id: ""
	I0210 14:03:13.070006  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.070018  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:13.070026  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:13.070096  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:13.103870  644218 cri.go:89] found id: ""
	I0210 14:03:13.103908  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.103921  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:13.103930  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:13.103995  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:13.140166  644218 cri.go:89] found id: ""
	I0210 14:03:13.140202  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.140214  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:13.140222  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:13.140309  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:13.176097  644218 cri.go:89] found id: ""
	I0210 14:03:13.176134  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.176147  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:13.176157  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:13.176234  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:13.210605  644218 cri.go:89] found id: ""
	I0210 14:03:13.210636  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.210645  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:13.210651  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:13.210716  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:13.243129  644218 cri.go:89] found id: ""
	I0210 14:03:13.243159  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.243168  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:13.243181  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:13.243207  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:13.296477  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:13.296519  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:13.310516  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:13.310547  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:13.382486  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:13.382516  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:13.382535  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:13.458590  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:13.458631  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:16.016166  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:16.030318  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:16.030390  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:16.068316  644218 cri.go:89] found id: ""
	I0210 14:03:16.068352  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.068360  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:16.068367  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:16.068422  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:16.104464  644218 cri.go:89] found id: ""
	I0210 14:03:16.104496  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.104505  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:16.104510  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:16.104622  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:16.143770  644218 cri.go:89] found id: ""
	I0210 14:03:16.143804  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.143816  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:16.143824  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:16.143894  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:16.179218  644218 cri.go:89] found id: ""
	I0210 14:03:16.179250  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.179259  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:16.179268  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:16.179323  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:16.221304  644218 cri.go:89] found id: ""
	I0210 14:03:16.221337  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.221346  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:16.221355  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:16.221407  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:16.257960  644218 cri.go:89] found id: ""
	I0210 14:03:16.257995  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.258005  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:16.258012  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:16.258064  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:16.292339  644218 cri.go:89] found id: ""
	I0210 14:03:16.292372  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.292383  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:16.292393  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:16.292463  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:16.326640  644218 cri.go:89] found id: ""
	I0210 14:03:16.326671  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.326683  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:16.326696  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:16.326738  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:16.341765  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:16.341796  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:16.409145  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:16.409172  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:16.409187  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:16.483525  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:16.483568  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:16.523394  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:16.523430  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:16.617415  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:18.618211  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:19.074741  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:19.089545  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:19.089619  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:19.128499  644218 cri.go:89] found id: ""
	I0210 14:03:19.128532  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.128543  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:19.128552  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:19.128621  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:19.163249  644218 cri.go:89] found id: ""
	I0210 14:03:19.163288  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.163301  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:19.163309  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:19.163385  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:19.197204  644218 cri.go:89] found id: ""
	I0210 14:03:19.197242  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.197253  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:19.197261  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:19.197329  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:19.232465  644218 cri.go:89] found id: ""
	I0210 14:03:19.232493  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.232501  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:19.232508  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:19.232577  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:19.266055  644218 cri.go:89] found id: ""
	I0210 14:03:19.266080  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.266088  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:19.266094  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:19.266150  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:19.300043  644218 cri.go:89] found id: ""
	I0210 14:03:19.300078  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.300088  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:19.300095  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:19.300158  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:19.336174  644218 cri.go:89] found id: ""
	I0210 14:03:19.336207  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.336220  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:19.336228  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:19.336322  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:19.371913  644218 cri.go:89] found id: ""
	I0210 14:03:19.371941  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.371949  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:19.371959  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:19.371978  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:19.424785  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:19.424828  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:19.439128  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:19.439160  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:19.513243  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:19.513268  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:19.513285  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:19.591125  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:19.591170  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:20.618992  643144 pod_ready.go:103] pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace has status "Ready":"False"
	I0210 14:03:21.118433  643144 pod_ready.go:82] duration metric: took 4m0.006504769s for pod "metrics-server-f79f97bbb-sfblx" in "kube-system" namespace to be "Ready" ...
	E0210 14:03:21.118460  643144 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0210 14:03:21.118470  643144 pod_ready.go:39] duration metric: took 4m7.415948288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:03:21.118486  643144 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:03:21.118525  643144 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:21.118587  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:21.169515  643144 cri.go:89] found id: "a23211eae9df3a645132b5e363ea205acee2ce84017b22cfae5889eff40ddcb7"
	I0210 14:03:21.169552  643144 cri.go:89] found id: ""
	I0210 14:03:21.169570  643144 logs.go:282] 1 containers: [a23211eae9df3a645132b5e363ea205acee2ce84017b22cfae5889eff40ddcb7]
	I0210 14:03:21.169641  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.174904  643144 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:21.174983  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:21.216842  643144 cri.go:89] found id: "d0f422f05c54a32c5e099a294dc886ccb159f08eedea22c7de8bb483221afcb0"
	I0210 14:03:21.216872  643144 cri.go:89] found id: ""
	I0210 14:03:21.216883  643144 logs.go:282] 1 containers: [d0f422f05c54a32c5e099a294dc886ccb159f08eedea22c7de8bb483221afcb0]
	I0210 14:03:21.216948  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.223600  643144 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:21.223672  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:21.270937  643144 cri.go:89] found id: "5fa85404ca32e0f663c8e49ca2c0aba59745141e609caa112507bab9f4f810bb"
	I0210 14:03:21.270966  643144 cri.go:89] found id: ""
	I0210 14:03:21.270977  643144 logs.go:282] 1 containers: [5fa85404ca32e0f663c8e49ca2c0aba59745141e609caa112507bab9f4f810bb]
	I0210 14:03:21.271045  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.276121  643144 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:21.276197  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:21.321370  643144 cri.go:89] found id: "2422e40a754762370538263a28044cfb099f5dc00b3c7a473c4bad56b456ce77"
	I0210 14:03:21.321394  643144 cri.go:89] found id: ""
	I0210 14:03:21.321408  643144 logs.go:282] 1 containers: [2422e40a754762370538263a28044cfb099f5dc00b3c7a473c4bad56b456ce77]
	I0210 14:03:21.321460  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.326226  643144 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:21.326298  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:21.367151  643144 cri.go:89] found id: "b013f54c38c40df10ceaf7245820f048ef80939a698fa98a8bc2b3bdbcfcef60"
	I0210 14:03:21.367175  643144 cri.go:89] found id: ""
	I0210 14:03:21.367183  643144 logs.go:282] 1 containers: [b013f54c38c40df10ceaf7245820f048ef80939a698fa98a8bc2b3bdbcfcef60]
	I0210 14:03:21.367234  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.371946  643144 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:21.372017  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:21.417671  643144 cri.go:89] found id: "298837e0f2a2489dfa457eb665a3a4b0fb73a36f99777417bf89f2425c061b4d"
	I0210 14:03:21.417707  643144 cri.go:89] found id: ""
	I0210 14:03:21.417718  643144 logs.go:282] 1 containers: [298837e0f2a2489dfa457eb665a3a4b0fb73a36f99777417bf89f2425c061b4d]
	I0210 14:03:21.417788  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.422144  643144 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:21.422212  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:21.463541  643144 cri.go:89] found id: ""
	I0210 14:03:21.463575  643144 logs.go:282] 0 containers: []
	W0210 14:03:21.463583  643144 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:21.463589  643144 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0210 14:03:21.463643  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 14:03:21.503017  643144 cri.go:89] found id: "6a33f10f589fd2893098f7478189bc101658c9c832a5c0f35fa18c3db7d824c8"
	I0210 14:03:21.503048  643144 cri.go:89] found id: "d2bcd0bc967216a03818519f4a6cfbcb2f4d64cc0d38d466ed863394f64d57bf"
	I0210 14:03:21.503055  643144 cri.go:89] found id: ""
	I0210 14:03:21.503064  643144 logs.go:282] 2 containers: [6a33f10f589fd2893098f7478189bc101658c9c832a5c0f35fa18c3db7d824c8 d2bcd0bc967216a03818519f4a6cfbcb2f4d64cc0d38d466ed863394f64d57bf]
	I0210 14:03:21.503121  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.507429  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.511528  643144 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:21.511608  643144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:21.551179  643144 cri.go:89] found id: "b6a52a938f1a978c69a855120a2970eccea51aeb8a404b52cfd777f55a06095b"
	I0210 14:03:21.551214  643144 cri.go:89] found id: ""
	I0210 14:03:21.551233  643144 logs.go:282] 1 containers: [b6a52a938f1a978c69a855120a2970eccea51aeb8a404b52cfd777f55a06095b]
	I0210 14:03:21.551300  643144 ssh_runner.go:195] Run: which crictl
	I0210 14:03:21.555457  643144 logs.go:123] Gathering logs for kube-controller-manager [298837e0f2a2489dfa457eb665a3a4b0fb73a36f99777417bf89f2425c061b4d] ...
	I0210 14:03:21.555482  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 298837e0f2a2489dfa457eb665a3a4b0fb73a36f99777417bf89f2425c061b4d"
	I0210 14:03:21.612435  643144 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:21.612477  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:22.185867  643144 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:22.185918  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:22.206435  643144 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:22.206486  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 14:03:22.363865  643144 logs.go:123] Gathering logs for storage-provisioner [d2bcd0bc967216a03818519f4a6cfbcb2f4d64cc0d38d466ed863394f64d57bf] ...
	I0210 14:03:22.363911  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2bcd0bc967216a03818519f4a6cfbcb2f4d64cc0d38d466ed863394f64d57bf"
	I0210 14:03:22.418041  643144 logs.go:123] Gathering logs for kubernetes-dashboard [b6a52a938f1a978c69a855120a2970eccea51aeb8a404b52cfd777f55a06095b] ...
	I0210 14:03:22.418072  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a52a938f1a978c69a855120a2970eccea51aeb8a404b52cfd777f55a06095b"
	I0210 14:03:22.470327  643144 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:22.470352  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:22.575047  643144 logs.go:123] Gathering logs for kube-apiserver [a23211eae9df3a645132b5e363ea205acee2ce84017b22cfae5889eff40ddcb7] ...
	I0210 14:03:22.575089  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23211eae9df3a645132b5e363ea205acee2ce84017b22cfae5889eff40ddcb7"
	I0210 14:03:22.637547  643144 logs.go:123] Gathering logs for storage-provisioner [6a33f10f589fd2893098f7478189bc101658c9c832a5c0f35fa18c3db7d824c8] ...
	I0210 14:03:22.637587  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a33f10f589fd2893098f7478189bc101658c9c832a5c0f35fa18c3db7d824c8"
	I0210 14:03:22.680814  643144 logs.go:123] Gathering logs for container status ...
	I0210 14:03:22.680853  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:22.732183  643144 logs.go:123] Gathering logs for etcd [d0f422f05c54a32c5e099a294dc886ccb159f08eedea22c7de8bb483221afcb0] ...
	I0210 14:03:22.732224  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0f422f05c54a32c5e099a294dc886ccb159f08eedea22c7de8bb483221afcb0"
	I0210 14:03:22.786650  643144 logs.go:123] Gathering logs for coredns [5fa85404ca32e0f663c8e49ca2c0aba59745141e609caa112507bab9f4f810bb] ...
	I0210 14:03:22.786693  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fa85404ca32e0f663c8e49ca2c0aba59745141e609caa112507bab9f4f810bb"
	I0210 14:03:22.825818  643144 logs.go:123] Gathering logs for kube-scheduler [2422e40a754762370538263a28044cfb099f5dc00b3c7a473c4bad56b456ce77] ...
	I0210 14:03:22.825854  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2422e40a754762370538263a28044cfb099f5dc00b3c7a473c4bad56b456ce77"
	I0210 14:03:22.863082  643144 logs.go:123] Gathering logs for kube-proxy [b013f54c38c40df10ceaf7245820f048ef80939a698fa98a8bc2b3bdbcfcef60] ...
	I0210 14:03:22.863118  643144 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b013f54c38c40df10ceaf7245820f048ef80939a698fa98a8bc2b3bdbcfcef60"
	I0210 14:03:22.132862  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:22.149797  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:22.149870  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:22.189682  644218 cri.go:89] found id: ""
	I0210 14:03:22.189707  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.189716  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:22.189722  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:22.189779  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:22.230353  644218 cri.go:89] found id: ""
	I0210 14:03:22.230386  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.230398  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:22.230407  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:22.230476  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:22.264639  644218 cri.go:89] found id: ""
	I0210 14:03:22.264673  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.264685  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:22.264693  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:22.264781  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:22.300462  644218 cri.go:89] found id: ""
	I0210 14:03:22.300497  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.300508  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:22.300517  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:22.300596  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:22.338620  644218 cri.go:89] found id: ""
	I0210 14:03:22.338652  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.338664  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:22.338672  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:22.338743  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:22.377041  644218 cri.go:89] found id: ""
	I0210 14:03:22.377073  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.377085  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:22.377093  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:22.377164  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:22.423792  644218 cri.go:89] found id: ""
	I0210 14:03:22.423815  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.423822  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:22.423829  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:22.423901  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:22.466237  644218 cri.go:89] found id: ""
	I0210 14:03:22.466268  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.466282  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:22.466293  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:22.466307  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:22.519771  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:22.519815  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:22.534443  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:22.534489  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:22.625188  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:22.625210  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:22.625224  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:22.702516  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:22.702557  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:25.251508  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:25.265701  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:25.265765  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:25.300644  644218 cri.go:89] found id: ""
	I0210 14:03:25.300676  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.300688  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:25.300698  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:25.300778  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:25.337683  644218 cri.go:89] found id: ""
	I0210 14:03:25.337716  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.337727  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:25.337736  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:25.337804  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:25.371570  644218 cri.go:89] found id: ""
	I0210 14:03:25.371608  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.371620  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:25.371627  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:25.371706  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:25.410522  644218 cri.go:89] found id: ""
	I0210 14:03:25.410546  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.410554  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:25.410561  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:25.410625  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:25.459186  644218 cri.go:89] found id: ""
	I0210 14:03:25.459217  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.459229  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:25.459237  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:25.459300  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:25.496445  644218 cri.go:89] found id: ""
	I0210 14:03:25.496471  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.496479  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:25.496485  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:25.496546  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:25.540431  644218 cri.go:89] found id: ""
	I0210 14:03:25.540459  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.540469  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:25.540476  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:25.540551  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:25.591900  644218 cri.go:89] found id: ""
	I0210 14:03:25.591938  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.591951  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:25.591966  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:25.591983  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:25.631755  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:25.631793  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:25.686052  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:25.686086  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:25.700599  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:25.700635  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:25.794403  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:25.794434  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:25.794451  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:27.848414  628186 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0210 14:03:27.848531  628186 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 14:03:27.851025  628186 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 14:03:27.851084  628186 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:03:27.851210  628186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:03:27.851315  628186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:03:27.851410  628186 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 14:03:27.851500  628186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:03:27.853206  628186 out.go:235]   - Generating certificates and keys ...
	I0210 14:03:27.853285  628186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:03:27.853354  628186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:03:27.853452  628186 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:03:27.853540  628186 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:03:27.853643  628186 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:03:27.853705  628186 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:03:27.853766  628186 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:03:27.853838  628186 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:03:27.853928  628186 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:03:27.854044  628186 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:03:27.854104  628186 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:03:27.854186  628186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:03:27.854262  628186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:03:27.854348  628186 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 14:03:27.854430  628186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:03:27.854521  628186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:03:27.854607  628186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:03:27.854711  628186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:03:27.854804  628186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:03:27.856224  628186 out.go:235]   - Booting up control plane ...
	I0210 14:03:27.856335  628186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:03:27.856417  628186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:03:27.856504  628186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:03:27.856659  628186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:03:27.856771  628186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:03:27.856832  628186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:03:27.857022  628186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 14:03:27.857184  628186 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 14:03:27.857278  628186 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.324246ms
	I0210 14:03:27.857400  628186 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 14:03:27.857465  628186 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000794479s
	I0210 14:03:27.857472  628186 kubeadm.go:310] 
	I0210 14:03:27.857505  628186 kubeadm.go:310] Unfortunately, an error has occurred:
	I0210 14:03:27.857533  628186 kubeadm.go:310] 	context deadline exceeded
	I0210 14:03:27.857540  628186 kubeadm.go:310] 
	I0210 14:03:27.857568  628186 kubeadm.go:310] This error is likely caused by:
	I0210 14:03:27.857601  628186 kubeadm.go:310] 	- The kubelet is not running
	I0210 14:03:27.857718  628186 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:03:27.857733  628186 kubeadm.go:310] 
	I0210 14:03:27.857845  628186 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:03:27.857889  628186 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0210 14:03:27.857924  628186 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0210 14:03:27.857931  628186 kubeadm.go:310] 
	I0210 14:03:27.858025  628186 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:03:27.858093  628186 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:03:27.858181  628186 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0210 14:03:27.858331  628186 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:03:27.858418  628186 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0210 14:03:27.858596  628186 kubeadm.go:394] duration metric: took 12m14.368913973s to StartCluster
	I0210 14:03:27.858605  628186 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:03:27.858647  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:27.858702  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:27.913401  628186 cri.go:89] found id: "3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2"
	I0210 14:03:27.913429  628186 cri.go:89] found id: ""
	I0210 14:03:27.913438  628186 logs.go:282] 1 containers: [3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2]
	I0210 14:03:27.913490  628186 ssh_runner.go:195] Run: which crictl
	I0210 14:03:27.918035  628186 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:27.918091  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:27.955486  628186 cri.go:89] found id: ""
	I0210 14:03:27.955517  628186 logs.go:282] 0 containers: []
	W0210 14:03:27.955525  628186 logs.go:284] No container was found matching "etcd"
	I0210 14:03:27.955531  628186 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:27.955587  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:27.995724  628186 cri.go:89] found id: ""
	I0210 14:03:27.995762  628186 logs.go:282] 0 containers: []
	W0210 14:03:27.995774  628186 logs.go:284] No container was found matching "coredns"
	I0210 14:03:27.995782  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:27.995850  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:28.034345  628186 cri.go:89] found id: "c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d"
	I0210 14:03:28.034375  628186 cri.go:89] found id: ""
	I0210 14:03:28.034386  628186 logs.go:282] 1 containers: [c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d]
	I0210 14:03:28.034455  628186 ssh_runner.go:195] Run: which crictl
	I0210 14:03:28.038647  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:28.038727  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:28.078417  628186 cri.go:89] found id: ""
	I0210 14:03:28.078447  628186 logs.go:282] 0 containers: []
	W0210 14:03:28.078456  628186 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:28.078462  628186 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:28.078528  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:28.115730  628186 cri.go:89] found id: "77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5"
	I0210 14:03:28.115756  628186 cri.go:89] found id: ""
	I0210 14:03:28.115765  628186 logs.go:282] 1 containers: [77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5]
	I0210 14:03:28.115828  628186 ssh_runner.go:195] Run: which crictl
	I0210 14:03:28.120258  628186 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:28.120347  628186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:28.159055  628186 cri.go:89] found id: ""
	I0210 14:03:28.159093  628186 logs.go:282] 0 containers: []
	W0210 14:03:28.159107  628186 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:28.159121  628186 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:28.159156  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:28.246425  628186 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:28.246459  628186 logs.go:123] Gathering logs for kube-apiserver [3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2] ...
	I0210 14:03:28.246479  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2"
	I0210 14:03:28.288809  628186 logs.go:123] Gathering logs for kube-scheduler [c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d] ...
	I0210 14:03:28.288842  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d"
	I0210 14:03:28.370694  628186 logs.go:123] Gathering logs for kube-controller-manager [77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5] ...
	I0210 14:03:28.370734  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5"
	I0210 14:03:28.415196  628186 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:28.415237  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:28.653781  628186 logs.go:123] Gathering logs for container status ...
	I0210 14:03:28.653824  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:28.710128  628186 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:28.710160  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:28.844954  628186 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:28.844998  628186 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0210 14:03:28.862542  628186 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.324246ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000794479s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 14:03:28.862616  628186 out.go:270] * 
	W0210 14:03:28.862679  628186 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.324246ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000794479s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:03:28.862705  628186 out.go:270] * 
	W0210 14:03:28.863771  628186 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 14:03:28.866784  628186 out.go:201] 
	W0210 14:03:28.867913  628186 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.324246ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000794479s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:03:28.867970  628186 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 14:03:28.868014  628186 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 14:03:28.869338  628186 out.go:201] 
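	
	Editor's note: the log above shows the kubelet passing its health check while the API server never becomes healthy, and minikube's suggestion points at a possible kubelet/CRI-O cgroup-driver mismatch. The following is a minimal, hypothetical sketch of the checks that suggestion and the kubeadm advice imply; it is not part of the captured log. It assumes the kubelet config path shown earlier in this log (/var/lib/kubelet/config.yaml) and the stock CRI-O config locations (/etc/crio/crio.conf, /etc/crio/crio.conf.d/), and would be run inside the node, e.g. via `minikube -p pause-145767 ssh`:
	
	    # Hypothetical troubleshooting sketch; paths for the CRI-O config are the stock
	    # locations and may differ on this VM image.
	    sudo systemctl status kubelet --no-pager          # is the kubelet unit active?
	    sudo journalctl -xeu kubelet -n 100 --no-pager    # recent kubelet errors
	    # Compare the kubelet cgroup driver with CRI-O's cgroup manager; they must match.
	    sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	    sudo grep -Ri cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	    # List control-plane containers and inspect the one that keeps exiting,
	    # per the crictl advice printed by kubeadm above.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	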
	
	
	==> CRI-O <==
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.577229575Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ea515cd5354c88a9adfb26a4ff5710724046e2d3aef0f060b7cb7cefe5726bbc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-145767,Uid:b35a9683e30fd248e1ad29a54fff5689,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739195968176362074,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35a9683e30fd248e1ad29a54fff5689,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b35a9683e30fd248e1ad29a54fff5689,kubernetes.io/config.seen: 2025-02-10T13:59:27.722644435Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47f07c3d6cb714728af35f295085a599aa3506d64b9a0afbbb868f02c1f68faa,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-145767,Ui
d:996851aade2d7af1d9e1503a69ea299d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739195968173610027,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 996851aade2d7af1d9e1503a69ea299d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.134:8443,kubernetes.io/config.hash: 996851aade2d7af1d9e1503a69ea299d,kubernetes.io/config.seen: 2025-02-10T13:59:27.722642635Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c08b9d750b3175f8d2176b5c46294debd7297c93642d61ee89de407c29bbd1eb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-145767,Uid:4c055ed7bc9f951a0788d3db2892f268,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739195968168923663,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-controller-manager-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c055ed7bc9f951a0788d3db2892f268,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4c055ed7bc9f951a0788d3db2892f268,kubernetes.io/config.seen: 2025-02-10T13:59:27.722643589Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c11c45cffbefb6b2db382b34c02820b31b461f628cbcc574b595825eb1040446,Metadata:&PodSandboxMetadata{Name:etcd-pause-145767,Uid:ffae99bf27dfc9b8c8451c0e5ecd01ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739195968161377830,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffae99bf27dfc9b8c8451c0e5ecd01ce,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.134:2379,kubernetes.io/config.hash: ffae99bf27dfc9
b8c8451c0e5ecd01ce,kubernetes.io/config.seen: 2025-02-10T13:59:27.722639258Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fe7968ee-3525-420b-83f9-df9ea61e904d name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.578109275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2392a0c6-851c-4bb1-ac2d-f2fc5847ee32 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.578216546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2392a0c6-851c-4bb1-ac2d-f2fc5847ee32 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.578364004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5,PodSandboxId:c08b9d750b3175f8d2176b5c46294debd7297c93642d61ee89de407c29bbd1eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739196145758978240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c055ed7bc9f951a0788d3db2892f268,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2,PodSandboxId:47f07c3d6cb714728af35f295085a599aa3506d64b9a0afbbb868f02c1f68faa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739196133761673746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 996851aade2d7af1d9e1503a69ea299d,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d,PodSandboxId:ea515cd5354c88a9adfb26a4ff5710724046e2d3aef0f060b7cb7cefe5726bbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739195968448303906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35a9683e30fd248e1ad29a54fff5689,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2392a0c6-851c-4bb1-ac2d-f2fc5847ee32 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.595455376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66dc5a5a-b549-4e1b-9ce7-46e1dd9b0c32 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.595576792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66dc5a5a-b549-4e1b-9ce7-46e1dd9b0c32 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.597892060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f18caed-64ad-45f6-a5a2-635d8b555bbf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.598443097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196209598410651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f18caed-64ad-45f6-a5a2-635d8b555bbf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.599431543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fb3cdcc-1a23-4547-b138-76e30ef309ef name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.599526411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fb3cdcc-1a23-4547-b138-76e30ef309ef name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.599651406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5,PodSandboxId:c08b9d750b3175f8d2176b5c46294debd7297c93642d61ee89de407c29bbd1eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739196145758978240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c055ed7bc9f951a0788d3db2892f268,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2,PodSandboxId:47f07c3d6cb714728af35f295085a599aa3506d64b9a0afbbb868f02c1f68faa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739196133761673746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 996851aade2d7af1d9e1503a69ea299d,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d,PodSandboxId:ea515cd5354c88a9adfb26a4ff5710724046e2d3aef0f060b7cb7cefe5726bbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739195968448303906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35a9683e30fd248e1ad29a54fff5689,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fb3cdcc-1a23-4547-b138-76e30ef309ef name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.645839509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96a7ac41-4a6c-4053-a2d7-ebb07cc29f7b name=/runtime.v1.RuntimeService/Version
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.645913366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96a7ac41-4a6c-4053-a2d7-ebb07cc29f7b name=/runtime.v1.RuntimeService/Version
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.647696449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bae9b40-f93c-4ba6-ae1e-82126c1c5036 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.648391393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196209648358751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bae9b40-f93c-4ba6-ae1e-82126c1c5036 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.649257167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d51c5be-cc0b-443a-97ad-7f68a432c8e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.649393354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d51c5be-cc0b-443a-97ad-7f68a432c8e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.649538140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5,PodSandboxId:c08b9d750b3175f8d2176b5c46294debd7297c93642d61ee89de407c29bbd1eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739196145758978240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c055ed7bc9f951a0788d3db2892f268,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2,PodSandboxId:47f07c3d6cb714728af35f295085a599aa3506d64b9a0afbbb868f02c1f68faa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739196133761673746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 996851aade2d7af1d9e1503a69ea299d,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d,PodSandboxId:ea515cd5354c88a9adfb26a4ff5710724046e2d3aef0f060b7cb7cefe5726bbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739195968448303906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35a9683e30fd248e1ad29a54fff5689,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d51c5be-cc0b-443a-97ad-7f68a432c8e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.693260883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce0cb321-8b91-4c1e-bc15-e8d9439a5eeb name=/runtime.v1.RuntimeService/Version
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.693400701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce0cb321-8b91-4c1e-bc15-e8d9439a5eeb name=/runtime.v1.RuntimeService/Version
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.694668279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7ee6660-9754-42ea-81a3-d8779c296a39 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.695215228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196209695189390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7ee6660-9754-42ea-81a3-d8779c296a39 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.696391232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce8e08ec-24e1-4025-a7c9-a3affa9a3e3d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.696463291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce8e08ec-24e1-4025-a7c9-a3affa9a3e3d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:03:29 pause-145767 crio[2785]: time="2025-02-10 14:03:29.696561416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5,PodSandboxId:c08b9d750b3175f8d2176b5c46294debd7297c93642d61ee89de407c29bbd1eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739196145758978240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c055ed7bc9f951a0788d3db2892f268,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2,PodSandboxId:47f07c3d6cb714728af35f295085a599aa3506d64b9a0afbbb868f02c1f68faa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739196133761673746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 996851aade2d7af1d9e1503a69ea299d,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d,PodSandboxId:ea515cd5354c88a9adfb26a4ff5710724046e2d3aef0f060b7cb7cefe5726bbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739195968448303906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35a9683e30fd248e1ad29a54fff5689,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce8e08ec-24e1-4025-a7c9-a3affa9a3e3d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	77c1df92a1e7b       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   About a minute ago   Exited              kube-controller-manager   15                  c08b9d750b317       kube-controller-manager-pause-145767
	3f7776b6326e5       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   About a minute ago   Exited              kube-apiserver            15                  47f07c3d6cb71       kube-apiserver-pause-145767
	c713af9bc5545       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   4 minutes ago        Running             kube-scheduler            4                   ea515cd5354c8       kube-scheduler-pause-145767
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.052014] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.214928] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[Feb10 13:49] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.279715] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.408878] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +0.057910] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.094084] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +6.571709] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.082701] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.314319] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.093239] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[ +10.331058] kauditd_printk_skb: 72 callbacks suppressed
	[ +10.106741] systemd-fstab-generator[2500]: Ignoring "noauto" option for root device
	[  +0.231114] systemd-fstab-generator[2526]: Ignoring "noauto" option for root device
	[  +0.371386] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	[  +0.227081] systemd-fstab-generator[2589]: Ignoring "noauto" option for root device
	[  +0.427032] systemd-fstab-generator[2652]: Ignoring "noauto" option for root device
	[Feb10 13:51] systemd-fstab-generator[2903]: Ignoring "noauto" option for root device
	[  +0.092776] kauditd_printk_skb: 174 callbacks suppressed
	[  +2.558552] systemd-fstab-generator[3304]: Ignoring "noauto" option for root device
	[ +22.753639] kauditd_printk_skb: 103 callbacks suppressed
	[Feb10 13:55] systemd-fstab-generator[8231]: Ignoring "noauto" option for root device
	[ +22.533760] kauditd_printk_skb: 70 callbacks suppressed
	[Feb10 13:59] systemd-fstab-generator[9203]: Ignoring "noauto" option for root device
	[ +22.662612] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 14:03:29 up 14 min,  0 users,  load average: 0.11, 0.18, 0.12
	Linux pause-145767 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2] <==
	I0210 14:02:13.931209       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0210 14:02:14.122162       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:14.122986       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0210 14:02:14.124673       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0210 14:02:14.140248       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 14:02:14.149301       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0210 14:02:14.149335       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0210 14:02:14.149552       1 instance.go:233] Using reconciler: lease
	W0210 14:02:14.150442       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:15.123540       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:15.123540       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:15.151472       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:16.441525       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:16.758454       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:16.994647       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:19.119038       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:19.246487       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:19.297096       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:22.551560       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:22.757223       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:23.486160       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:28.632272       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:29.652555       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0210 14:02:31.305317       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0210 14:02:34.150241       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5] <==
	I0210 14:02:26.412729       1 serving.go:386] Generated self-signed cert in-memory
	I0210 14:02:26.929563       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 14:02:26.929664       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 14:02:26.931724       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 14:02:26.932662       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 14:02:26.932910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 14:02:26.933034       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0210 14:02:45.157478       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.134:8443/healthz\": dial tcp 192.168.39.134:8443: connect: connection refused"
	
	
	==> kube-scheduler [c713af9bc554515332b518b087b2d1a7c7c794be48f31b2417ef9ebc37c6b19d] <==
	E0210 14:02:59.771570       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.134:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:00.772990       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.134:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:00.773256       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.134:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:04.410203       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:04.410349       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.134:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:07.078504       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:07.078620       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.134:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:14.538526       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:14.538593       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.134:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:19.699358       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: Get "https://192.168.39.134:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:19.699417       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: Get \"https://192.168.39.134:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:20.030312       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.134:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:20.030420       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.134:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:21.360821       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.134:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:21.360908       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.134:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:22.095623       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.134:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:22.095834       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.134:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:22.267888       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.134:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:22.268059       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.134:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:22.440305       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: Get "https://192.168.39.134:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:22.440383       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.134:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:24.920199       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.134:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:24.920602       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.134:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	W0210 14:03:27.710195       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.134:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	E0210 14:03:27.710315       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.134:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Feb 10 14:03:18 pause-145767 kubelet[9210]: E0210 14:03:18.762940    9210 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.16-0,Command:[etcd --advertise-client-urls=https://192.168.39.134:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.39.134:2380 --initial-cluster=pause-145767=https://192.168.39.134:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.39.134:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.39.134:2380 --name=pause-145767 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy
-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler
:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-pause-145767_kube-system(ffae99bf27dfc9b8c8451c0e5ecd01ce): CreateContainerError: the container name \"
k8s_etcd_etcd-pause-145767_kube-system_ffae99bf27dfc9b8c8451c0e5ecd01ce_1\" is already in use by de9dc6e7e8d441f864b6ffb6f612478c714ed07ed007bf52a371ccc95f1bcf4b. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Feb 10 14:03:18 pause-145767 kubelet[9210]: E0210 14:03:18.764516    9210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-pause-145767_kube-system_ffae99bf27dfc9b8c8451c0e5ecd01ce_1\\\" is already in use by de9dc6e7e8d441f864b6ffb6f612478c714ed07ed007bf52a371ccc95f1bcf4b. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-pause-145767" podUID="ffae99bf27dfc9b8c8451c0e5ecd01ce"
	Feb 10 14:03:19 pause-145767 kubelet[9210]: W0210 14:03:19.328536    9210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-145767&limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	Feb 10 14:03:19 pause-145767 kubelet[9210]: E0210 14:03:19.328979    9210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-145767&limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	Feb 10 14:03:19 pause-145767 kubelet[9210]: E0210 14:03:19.748910    9210 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145767\" not found" node="pause-145767"
	Feb 10 14:03:19 pause-145767 kubelet[9210]: I0210 14:03:19.748998    9210 scope.go:117] "RemoveContainer" containerID="77c1df92a1e7b9951b0a5c91b3a8180bb3d938994df8ba06be1c12746eac65a5"
	Feb 10 14:03:19 pause-145767 kubelet[9210]: E0210 14:03:19.749120    9210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-145767_kube-system(4c055ed7bc9f951a0788d3db2892f268)\"" pod="kube-system/kube-controller-manager-pause-145767" podUID="4c055ed7bc9f951a0788d3db2892f268"
	Feb 10 14:03:20 pause-145767 kubelet[9210]: E0210 14:03:20.748582    9210 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145767\" not found" node="pause-145767"
	Feb 10 14:03:20 pause-145767 kubelet[9210]: I0210 14:03:20.748665    9210 scope.go:117] "RemoveContainer" containerID="3f7776b6326e5a32d669165cdea7dc131801f0da207460e2875eedb2238fa6d2"
	Feb 10 14:03:20 pause-145767 kubelet[9210]: E0210 14:03:20.748824    9210 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-pause-145767_kube-system(996851aade2d7af1d9e1503a69ea299d)\"" pod="kube-system/kube-apiserver-pause-145767" podUID="996851aade2d7af1d9e1503a69ea299d"
	Feb 10 14:03:23 pause-145767 kubelet[9210]: I0210 14:03:23.176195    9210 kubelet_node_status.go:76] "Attempting to register node" node="pause-145767"
	Feb 10 14:03:23 pause-145767 kubelet[9210]: E0210 14:03:23.177663    9210 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.134:8443: connect: connection refused" node="pause-145767"
	Feb 10 14:03:23 pause-145767 kubelet[9210]: W0210 14:03:23.560979    9210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	Feb 10 14:03:23 pause-145767 kubelet[9210]: E0210 14:03:23.561286    9210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	Feb 10 14:03:24 pause-145767 kubelet[9210]: E0210 14:03:24.171317    9210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-145767?timeout=10s\": dial tcp 192.168.39.134:8443: connect: connection refused" interval="7s"
	Feb 10 14:03:25 pause-145767 kubelet[9210]: E0210 14:03:25.150544    9210 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.134:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-145767.1822dd849dedfabb  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-145767,UID:pause-145767,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node pause-145767 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:pause-145767,},FirstTimestamp:2025-02-10 13:59:27.773473467 +0000 UTC m=+0.461453212,LastTimestamp:2025-02-10 13:59:27.773473467 +0000 UTC m=+0.461453212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-145767,}"
	Feb 10 14:03:27 pause-145767 kubelet[9210]: E0210 14:03:27.762918    9210 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 14:03:27 pause-145767 kubelet[9210]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 14:03:27 pause-145767 kubelet[9210]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 14:03:27 pause-145767 kubelet[9210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 14:03:27 pause-145767 kubelet[9210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 14:03:27 pause-145767 kubelet[9210]: E0210 14:03:27.843133    9210 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196207842629221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 14:03:27 pause-145767 kubelet[9210]: E0210 14:03:27.843166    9210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196207842629221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 10 14:03:28 pause-145767 kubelet[9210]: W0210 14:03:28.871222    9210 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.134:8443: connect: connection refused
	Feb 10 14:03:28 pause-145767 kubelet[9210]: E0210 14:03:28.871364    9210 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.134:8443: connect: connection refused" logger="UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-145767 -n pause-145767
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-145767 -n pause-145767: exit status 2 (251.666093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-145767" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (838.49s)
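Note: the kubelet log above shows etcd repeatedly failing with CreateContainerError because the generated container name "k8s_etcd_etcd-pause-145767_kube-system_ffae99bf27dfc9b8c8451c0e5ecd01ce_1" is still held by an exited CRI-O container. As a hedged debugging sketch only (not part of this test run, and assuming the pause-145767 VM is still reachable), the stale container could be inspected and removed with crictl inside the node; the profile name and container ID below are taken verbatim from the kubelet error:

	# open a shell in the affected minikube profile (name from the log above)
	$ minikube -p pause-145767 ssh
	# inside the VM: list all CRI-O containers, including exited ones, to find the name conflict
	$ sudo crictl ps -a | grep etcd
	# remove the exited container that still holds the name (ID from the kubelet error above)
	$ sudo crictl rm de9dc6e7e8d441f864b6ffb6f612478c714ed07ed007bf52a371ccc95f1bcf4b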

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (275.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.854377059s)

                                                
                                                
-- stdout --
	* [old-k8s-version-643105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-643105" primary control-plane node in "old-k8s-version-643105" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:54:28.624980  638492 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:54:28.625156  638492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:54:28.625170  638492 out.go:358] Setting ErrFile to fd 2...
	I0210 13:54:28.625177  638492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:54:28.625497  638492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:54:28.626309  638492 out.go:352] Setting JSON to false
	I0210 13:54:28.627844  638492 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13014,"bootTime":1739182655,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:54:28.627998  638492 start.go:139] virtualization: kvm guest
	I0210 13:54:28.630292  638492 out.go:177] * [old-k8s-version-643105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:54:28.631930  638492 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 13:54:28.631940  638492 notify.go:220] Checking for updates...
	I0210 13:54:28.634080  638492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:54:28.635373  638492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:54:28.636515  638492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:54:28.637687  638492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:54:28.638998  638492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:54:28.640662  638492 config.go:182] Loaded profile config "bridge-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:54:28.640775  638492 config.go:182] Loaded profile config "flannel-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:54:28.640895  638492 config.go:182] Loaded profile config "pause-145767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:54:28.641006  638492 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:54:28.681137  638492 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 13:54:28.682182  638492 start.go:297] selected driver: kvm2
	I0210 13:54:28.682196  638492 start.go:901] validating driver "kvm2" against <nil>
	I0210 13:54:28.682207  638492 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:54:28.682908  638492 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:54:28.682996  638492 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 13:54:28.699445  638492 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 13:54:28.699552  638492 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 13:54:28.699876  638492 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 13:54:28.699918  638492 cni.go:84] Creating CNI manager for ""
	I0210 13:54:28.699986  638492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:54:28.699998  638492 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 13:54:28.700066  638492 start.go:340] cluster config:
	{Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:54:28.700199  638492 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 13:54:28.701820  638492 out.go:177] * Starting "old-k8s-version-643105" primary control-plane node in "old-k8s-version-643105" cluster
	I0210 13:54:28.703068  638492 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:54:28.703124  638492 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 13:54:28.703137  638492 cache.go:56] Caching tarball of preloaded images
	I0210 13:54:28.703240  638492 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 13:54:28.703253  638492 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 13:54:28.703366  638492 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/config.json ...
	I0210 13:54:28.703391  638492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/config.json: {Name:mk31719e3662ec714a53065c0e22ceedde13af9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:54:28.703554  638492 start.go:360] acquireMachinesLock for old-k8s-version-643105: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 13:54:28.703594  638492 start.go:364] duration metric: took 22.982µs to acquireMachinesLock for "old-k8s-version-643105"
	I0210 13:54:28.703615  638492 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-versi
on-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 13:54:28.703700  638492 start.go:125] createHost starting for "" (driver="kvm2")
	I0210 13:54:28.705927  638492 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 13:54:28.706115  638492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:54:28.706177  638492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:54:28.721743  638492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0210 13:54:28.722271  638492 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:54:28.722823  638492 main.go:141] libmachine: Using API Version  1
	I0210 13:54:28.722847  638492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:54:28.723181  638492 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:54:28.723393  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 13:54:28.723562  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:28.723764  638492 start.go:159] libmachine.API.Create for "old-k8s-version-643105" (driver="kvm2")
	I0210 13:54:28.723795  638492 client.go:168] LocalClient.Create starting
	I0210 13:54:28.723835  638492 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem
	I0210 13:54:28.723872  638492 main.go:141] libmachine: Decoding PEM data...
	I0210 13:54:28.723888  638492 main.go:141] libmachine: Parsing certificate...
	I0210 13:54:28.723968  638492 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem
	I0210 13:54:28.723997  638492 main.go:141] libmachine: Decoding PEM data...
	I0210 13:54:28.724015  638492 main.go:141] libmachine: Parsing certificate...
	I0210 13:54:28.724038  638492 main.go:141] libmachine: Running pre-create checks...
	I0210 13:54:28.724052  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .PreCreateCheck
	I0210 13:54:28.724462  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetConfigRaw
	I0210 13:54:28.724963  638492 main.go:141] libmachine: Creating machine...
	I0210 13:54:28.724983  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .Create
	I0210 13:54:28.725111  638492 main.go:141] libmachine: (old-k8s-version-643105) creating KVM machine...
	I0210 13:54:28.725124  638492 main.go:141] libmachine: (old-k8s-version-643105) creating network...
	I0210 13:54:28.726379  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found existing default KVM network
	I0210 13:54:28.727543  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:28.727369  638514 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:43:84:d7} reservation:<nil>}
	I0210 13:54:28.728497  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:28.728415  638514 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6d:1d:fc} reservation:<nil>}
	I0210 13:54:28.729586  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:28.729491  638514 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:5a:92:3e} reservation:<nil>}
	I0210 13:54:28.730752  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:28.730670  638514 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000285fe0}
	I0210 13:54:28.730774  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | created network xml: 
	I0210 13:54:28.730786  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | <network>
	I0210 13:54:28.730794  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |   <name>mk-old-k8s-version-643105</name>
	I0210 13:54:28.730802  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |   <dns enable='no'/>
	I0210 13:54:28.730808  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |   
	I0210 13:54:28.730818  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0210 13:54:28.730826  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |     <dhcp>
	I0210 13:54:28.730836  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0210 13:54:28.730852  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |     </dhcp>
	I0210 13:54:28.730861  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |   </ip>
	I0210 13:54:28.730868  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG |   
	I0210 13:54:28.730873  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | </network>
	I0210 13:54:28.730878  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | 
	I0210 13:54:28.735817  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | trying to create private KVM network mk-old-k8s-version-643105 192.168.72.0/24...
	I0210 13:54:28.820986  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | private KVM network mk-old-k8s-version-643105 192.168.72.0/24 created
	I0210 13:54:28.821020  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:28.820925  638514 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:54:28.821045  638492 main.go:141] libmachine: (old-k8s-version-643105) setting up store path in /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105 ...
	I0210 13:54:28.821057  638492 main.go:141] libmachine: (old-k8s-version-643105) building disk image from file:///home/jenkins/minikube-integration/20390-580861/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 13:54:28.821103  638492 main.go:141] libmachine: (old-k8s-version-643105) Downloading /home/jenkins/minikube-integration/20390-580861/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20390-580861/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 13:54:29.121044  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:29.120917  638514 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa...
	I0210 13:54:29.233583  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:29.233462  638514 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/old-k8s-version-643105.rawdisk...
	I0210 13:54:29.233616  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Writing magic tar header
	I0210 13:54:29.233685  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Writing SSH key tar header
	I0210 13:54:29.233721  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:29.233613  638514 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105 ...
	I0210 13:54:29.233743  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105
	I0210 13:54:29.233811  638492 main.go:141] libmachine: (old-k8s-version-643105) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105 (perms=drwx------)
	I0210 13:54:29.233847  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube/machines
	I0210 13:54:29.233855  638492 main.go:141] libmachine: (old-k8s-version-643105) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube/machines (perms=drwxr-xr-x)
	I0210 13:54:29.233866  638492 main.go:141] libmachine: (old-k8s-version-643105) setting executable bit set on /home/jenkins/minikube-integration/20390-580861/.minikube (perms=drwxr-xr-x)
	I0210 13:54:29.233875  638492 main.go:141] libmachine: (old-k8s-version-643105) setting executable bit set on /home/jenkins/minikube-integration/20390-580861 (perms=drwxrwxr-x)
	I0210 13:54:29.233881  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:54:29.233893  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20390-580861
	I0210 13:54:29.233908  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0210 13:54:29.233921  638492 main.go:141] libmachine: (old-k8s-version-643105) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0210 13:54:29.233934  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | checking permissions on dir: /home/jenkins
	I0210 13:54:29.233944  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | checking permissions on dir: /home
	I0210 13:54:29.233953  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | skipping /home - not owner
	I0210 13:54:29.233983  638492 main.go:141] libmachine: (old-k8s-version-643105) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0210 13:54:29.234001  638492 main.go:141] libmachine: (old-k8s-version-643105) creating domain...
	I0210 13:54:29.235279  638492 main.go:141] libmachine: (old-k8s-version-643105) define libvirt domain using xml: 
	I0210 13:54:29.235300  638492 main.go:141] libmachine: (old-k8s-version-643105) <domain type='kvm'>
	I0210 13:54:29.235310  638492 main.go:141] libmachine: (old-k8s-version-643105)   <name>old-k8s-version-643105</name>
	I0210 13:54:29.235317  638492 main.go:141] libmachine: (old-k8s-version-643105)   <memory unit='MiB'>2200</memory>
	I0210 13:54:29.235325  638492 main.go:141] libmachine: (old-k8s-version-643105)   <vcpu>2</vcpu>
	I0210 13:54:29.235344  638492 main.go:141] libmachine: (old-k8s-version-643105)   <features>
	I0210 13:54:29.235356  638492 main.go:141] libmachine: (old-k8s-version-643105)     <acpi/>
	I0210 13:54:29.235361  638492 main.go:141] libmachine: (old-k8s-version-643105)     <apic/>
	I0210 13:54:29.235369  638492 main.go:141] libmachine: (old-k8s-version-643105)     <pae/>
	I0210 13:54:29.235379  638492 main.go:141] libmachine: (old-k8s-version-643105)     
	I0210 13:54:29.235410  638492 main.go:141] libmachine: (old-k8s-version-643105)   </features>
	I0210 13:54:29.235434  638492 main.go:141] libmachine: (old-k8s-version-643105)   <cpu mode='host-passthrough'>
	I0210 13:54:29.235444  638492 main.go:141] libmachine: (old-k8s-version-643105)   
	I0210 13:54:29.235451  638492 main.go:141] libmachine: (old-k8s-version-643105)   </cpu>
	I0210 13:54:29.235464  638492 main.go:141] libmachine: (old-k8s-version-643105)   <os>
	I0210 13:54:29.235472  638492 main.go:141] libmachine: (old-k8s-version-643105)     <type>hvm</type>
	I0210 13:54:29.235493  638492 main.go:141] libmachine: (old-k8s-version-643105)     <boot dev='cdrom'/>
	I0210 13:54:29.235504  638492 main.go:141] libmachine: (old-k8s-version-643105)     <boot dev='hd'/>
	I0210 13:54:29.235532  638492 main.go:141] libmachine: (old-k8s-version-643105)     <bootmenu enable='no'/>
	I0210 13:54:29.235555  638492 main.go:141] libmachine: (old-k8s-version-643105)   </os>
	I0210 13:54:29.235588  638492 main.go:141] libmachine: (old-k8s-version-643105)   <devices>
	I0210 13:54:29.235616  638492 main.go:141] libmachine: (old-k8s-version-643105)     <disk type='file' device='cdrom'>
	I0210 13:54:29.235667  638492 main.go:141] libmachine: (old-k8s-version-643105)       <source file='/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/boot2docker.iso'/>
	I0210 13:54:29.235701  638492 main.go:141] libmachine: (old-k8s-version-643105)       <target dev='hdc' bus='scsi'/>
	I0210 13:54:29.235715  638492 main.go:141] libmachine: (old-k8s-version-643105)       <readonly/>
	I0210 13:54:29.235725  638492 main.go:141] libmachine: (old-k8s-version-643105)     </disk>
	I0210 13:54:29.235735  638492 main.go:141] libmachine: (old-k8s-version-643105)     <disk type='file' device='disk'>
	I0210 13:54:29.235747  638492 main.go:141] libmachine: (old-k8s-version-643105)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0210 13:54:29.235758  638492 main.go:141] libmachine: (old-k8s-version-643105)       <source file='/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/old-k8s-version-643105.rawdisk'/>
	I0210 13:54:29.235774  638492 main.go:141] libmachine: (old-k8s-version-643105)       <target dev='hda' bus='virtio'/>
	I0210 13:54:29.235789  638492 main.go:141] libmachine: (old-k8s-version-643105)     </disk>
	I0210 13:54:29.235797  638492 main.go:141] libmachine: (old-k8s-version-643105)     <interface type='network'>
	I0210 13:54:29.235807  638492 main.go:141] libmachine: (old-k8s-version-643105)       <source network='mk-old-k8s-version-643105'/>
	I0210 13:54:29.235816  638492 main.go:141] libmachine: (old-k8s-version-643105)       <model type='virtio'/>
	I0210 13:54:29.235824  638492 main.go:141] libmachine: (old-k8s-version-643105)     </interface>
	I0210 13:54:29.235831  638492 main.go:141] libmachine: (old-k8s-version-643105)     <interface type='network'>
	I0210 13:54:29.235853  638492 main.go:141] libmachine: (old-k8s-version-643105)       <source network='default'/>
	I0210 13:54:29.235889  638492 main.go:141] libmachine: (old-k8s-version-643105)       <model type='virtio'/>
	I0210 13:54:29.235901  638492 main.go:141] libmachine: (old-k8s-version-643105)     </interface>
	I0210 13:54:29.235908  638492 main.go:141] libmachine: (old-k8s-version-643105)     <serial type='pty'>
	I0210 13:54:29.235928  638492 main.go:141] libmachine: (old-k8s-version-643105)       <target port='0'/>
	I0210 13:54:29.235938  638492 main.go:141] libmachine: (old-k8s-version-643105)     </serial>
	I0210 13:54:29.235963  638492 main.go:141] libmachine: (old-k8s-version-643105)     <console type='pty'>
	I0210 13:54:29.236004  638492 main.go:141] libmachine: (old-k8s-version-643105)       <target type='serial' port='0'/>
	I0210 13:54:29.236016  638492 main.go:141] libmachine: (old-k8s-version-643105)     </console>
	I0210 13:54:29.236029  638492 main.go:141] libmachine: (old-k8s-version-643105)     <rng model='virtio'>
	I0210 13:54:29.236043  638492 main.go:141] libmachine: (old-k8s-version-643105)       <backend model='random'>/dev/random</backend>
	I0210 13:54:29.236052  638492 main.go:141] libmachine: (old-k8s-version-643105)     </rng>
	I0210 13:54:29.236060  638492 main.go:141] libmachine: (old-k8s-version-643105)     
	I0210 13:54:29.236070  638492 main.go:141] libmachine: (old-k8s-version-643105)     
	I0210 13:54:29.236078  638492 main.go:141] libmachine: (old-k8s-version-643105)   </devices>
	I0210 13:54:29.236088  638492 main.go:141] libmachine: (old-k8s-version-643105) </domain>
	I0210 13:54:29.236098  638492 main.go:141] libmachine: (old-k8s-version-643105) 
	I0210 13:54:29.239985  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:2c:b6:60 in network default
	I0210 13:54:29.240567  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:29.240587  638492 main.go:141] libmachine: (old-k8s-version-643105) starting domain...
	I0210 13:54:29.240603  638492 main.go:141] libmachine: (old-k8s-version-643105) ensuring networks are active...
	I0210 13:54:29.241169  638492 main.go:141] libmachine: (old-k8s-version-643105) Ensuring network default is active
	I0210 13:54:29.241494  638492 main.go:141] libmachine: (old-k8s-version-643105) Ensuring network mk-old-k8s-version-643105 is active
	I0210 13:54:29.241980  638492 main.go:141] libmachine: (old-k8s-version-643105) getting domain XML...
	I0210 13:54:29.242795  638492 main.go:141] libmachine: (old-k8s-version-643105) creating domain...
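
The sequence logged above (render the domain XML, make sure the two networks it references are active, define the domain, then start it) corresponds to a handful of libvirt calls. Below is a minimal sketch using the libvirt Go bindings (libvirt.org/go/libvirt); it assumes domainXML holds XML like the dump printed above, and it illustrates the flow rather than reproducing the kvm2 driver's actual code.

	package main
	
	import (
		"log"
	
		"libvirt.org/go/libvirt"
	)
	
	func createAndStartDomain(domainXML string, networks []string) error {
		// Connect to the system libvirtd instance (the profile uses KVMQemuURI qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()
	
		// "ensuring networks are active..." - start any referenced network that is down.
		for _, name := range networks {
			net, err := conn.LookupNetworkByName(name)
			if err != nil {
				return err
			}
			if active, _ := net.IsActive(); !active {
				if err := net.Create(); err != nil {
					net.Free()
					return err
				}
			}
			net.Free()
		}
	
		// Define the persistent domain from its XML, then boot it ("creating domain...").
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create()
	}
	
	func main() {
		err := createAndStartDomain("<domain type='kvm'>...</domain>",
			[]string{"default", "mk-old-k8s-version-643105"})
		if err != nil {
			log.Fatal(err)
		}
	}
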
	I0210 13:54:30.568208  638492 main.go:141] libmachine: (old-k8s-version-643105) waiting for IP...
	I0210 13:54:30.569204  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:30.569614  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:30.569674  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:30.569602  638514 retry.go:31] will retry after 296.370125ms: waiting for domain to come up
	I0210 13:54:30.868217  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:30.868914  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:30.868951  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:30.868894  638514 retry.go:31] will retry after 261.049876ms: waiting for domain to come up
	I0210 13:54:31.131442  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:31.132014  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:31.132047  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:31.131977  638514 retry.go:31] will retry after 477.003864ms: waiting for domain to come up
	I0210 13:54:31.610369  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:31.610802  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:31.610834  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:31.610789  638514 retry.go:31] will retry after 604.399419ms: waiting for domain to come up
	I0210 13:54:32.216642  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:32.217201  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:32.217233  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:32.217154  638514 retry.go:31] will retry after 684.382964ms: waiting for domain to come up
	I0210 13:54:32.902769  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:32.903277  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:32.903308  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:32.903232  638514 retry.go:31] will retry after 662.768332ms: waiting for domain to come up
	I0210 13:54:33.567823  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:33.568446  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:33.568482  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:33.568410  638514 retry.go:31] will retry after 845.566514ms: waiting for domain to come up
	I0210 13:54:34.416180  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:34.416754  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:34.416830  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:34.416749  638514 retry.go:31] will retry after 898.360399ms: waiting for domain to come up
	I0210 13:54:35.316844  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:35.317404  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:35.317431  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:35.317368  638514 retry.go:31] will retry after 1.49490691s: waiting for domain to come up
	I0210 13:54:36.813575  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:36.814138  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:36.814193  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:36.814130  638514 retry.go:31] will retry after 1.636821117s: waiting for domain to come up
	I0210 13:54:38.453293  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:38.453899  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:38.453953  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:38.453886  638514 retry.go:31] will retry after 2.838331136s: waiting for domain to come up
	I0210 13:54:41.293482  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:41.293897  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:41.293942  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:41.293862  638514 retry.go:31] will retry after 2.462670637s: waiting for domain to come up
	I0210 13:54:43.758097  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:43.758630  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:43.758675  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:43.758602  638514 retry.go:31] will retry after 4.392072904s: waiting for domain to come up
	I0210 13:54:48.155741  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:48.156252  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 13:54:48.156302  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 13:54:48.156218  638514 retry.go:31] will retry after 4.215371358s: waiting for domain to come up
	I0210 13:54:52.375669  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:52.376388  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has current primary IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:52.376416  638492 main.go:141] libmachine: (old-k8s-version-643105) found domain IP: 192.168.72.78
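
The retry.go:31 lines above are a backoff loop: the driver keeps asking the network for a DHCP lease matching the guest's MAC, sleeping a little longer (with some jitter) after each miss, until an address appears or a deadline passes. A minimal sketch of that wait, assuming a hypothetical lookupLeaseIP helper standing in for the real lease query:

	package main
	
	import (
		"fmt"
		"log"
		"math/rand"
		"time"
	)
	
	// waitForIP polls for a DHCP lease for the given MAC, growing the delay a
	// little after every failed attempt, until it finds an IP or times out.
	// lookupLeaseIP is a placeholder for the real lease lookup (e.g. via libvirt).
	func waitForIP(mac string, lookupLeaseIP func(mac string) (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil && ip != "" {
				return ip, nil // "found domain IP: ..."
			}
			// Jittered, growing sleep, roughly matching the 296ms .. 4.4s waits in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			log.Printf("will retry after %v: waiting for domain to come up", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay += delay / 2
			}
		}
		return "", fmt.Errorf("timed out waiting for an IP for MAC %s", mac)
	}
	
	func main() {
		// Demo with a stub lookup; in practice the helper would query the libvirt network.
		ip, err := waitForIP("52:54:00:de:ed:f5",
			func(string) (string, error) { return "192.168.72.78", nil }, time.Minute)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("found domain IP:", ip)
	}
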
	I0210 13:54:52.376429  638492 main.go:141] libmachine: (old-k8s-version-643105) reserving static IP address...
	I0210 13:54:52.376714  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-643105", mac: "52:54:00:de:ed:f5", ip: "192.168.72.78"} in network mk-old-k8s-version-643105
	I0210 13:54:52.456976  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Getting to WaitForSSH function...
	I0210 13:54:52.457011  638492 main.go:141] libmachine: (old-k8s-version-643105) reserved static IP address 192.168.72.78 for domain old-k8s-version-643105
	I0210 13:54:52.457025  638492 main.go:141] libmachine: (old-k8s-version-643105) waiting for SSH...
	I0210 13:54:52.460675  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:52.460955  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105
	I0210 13:54:52.460987  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find defined IP address of network mk-old-k8s-version-643105 interface with MAC address 52:54:00:de:ed:f5
	I0210 13:54:52.461107  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH client type: external
	I0210 13:54:52.461133  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa (-rw-------)
	I0210 13:54:52.461199  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:54:52.461218  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | About to run SSH command:
	I0210 13:54:52.461232  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | exit 0
	I0210 13:54:52.465107  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | SSH cmd err, output: exit status 255: 
	I0210 13:54:52.465130  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0210 13:54:52.465137  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | command : exit 0
	I0210 13:54:52.465143  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | err     : exit status 255
	I0210 13:54:52.465149  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | output  : 
	I0210 13:54:55.466500  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Getting to WaitForSSH function...
	I0210 13:54:55.468994  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.469514  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:55.469546  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.469685  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH client type: external
	I0210 13:54:55.469732  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa (-rw-------)
	I0210 13:54:55.469766  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 13:54:55.469780  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | About to run SSH command:
	I0210 13:54:55.469807  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | exit 0
	I0210 13:54:55.596697  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | SSH cmd err, output: <nil>: 
	I0210 13:54:55.596909  638492 main.go:141] libmachine: (old-k8s-version-643105) KVM machine creation complete
	I0210 13:54:55.597240  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetConfigRaw
	I0210 13:54:55.597910  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:55.598104  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:55.598283  638492 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0210 13:54:55.598300  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetState
	I0210 13:54:55.599596  638492 main.go:141] libmachine: Detecting operating system of created instance...
	I0210 13:54:55.599612  638492 main.go:141] libmachine: Waiting for SSH to be available...
	I0210 13:54:55.599617  638492 main.go:141] libmachine: Getting to WaitForSSH function...
	I0210 13:54:55.599625  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:55.601991  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.602350  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:55.602398  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.602498  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:55.602662  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.602815  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.602947  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:55.603121  638492 main.go:141] libmachine: Using SSH client type: native
	I0210 13:54:55.603322  638492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:54:55.603332  638492 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0210 13:54:55.715679  638492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
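
Both SSH waits above boil down to running `exit 0` on the guest: the first attempt uses the external ssh binary before a lease exists (hence `docker@` with no host and exit status 255), succeeds once the IP is known, and the native Go client then repeats the same probe. A minimal sketch of that native probe with golang.org/x/crypto/ssh, using the address, user and key path from the log purely as example values:

	package main
	
	import (
		"log"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// probeSSH dials host:22 as the given user with a private key and runs
	// "exit 0"; a nil error means the guest's sshd is up and accepts the key.
	func probeSSH(host, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; mirrors StrictHostKeyChecking=no
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0")
	}
	
	func main() {
		key := "/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa"
		if err := probeSSH("192.168.72.78", "docker", key); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}
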
	I0210 13:54:55.715714  638492 main.go:141] libmachine: Detecting the provisioner...
	I0210 13:54:55.715727  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:55.718602  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.719018  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:55.719048  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.719230  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:55.719412  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.719600  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.719744  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:55.719949  638492 main.go:141] libmachine: Using SSH client type: native
	I0210 13:54:55.720126  638492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:54:55.720138  638492 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0210 13:54:55.837115  638492 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0210 13:54:55.837187  638492 main.go:141] libmachine: found compatible host: buildroot
	I0210 13:54:55.837194  638492 main.go:141] libmachine: Provisioning with buildroot...
	I0210 13:54:55.837211  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 13:54:55.837496  638492 buildroot.go:166] provisioning hostname "old-k8s-version-643105"
	I0210 13:54:55.837533  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 13:54:55.837716  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:55.840404  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.840798  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:55.840829  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.840982  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:55.841193  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.841330  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.841453  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:55.841577  638492 main.go:141] libmachine: Using SSH client type: native
	I0210 13:54:55.841758  638492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:54:55.841770  638492 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-643105 && echo "old-k8s-version-643105" | sudo tee /etc/hostname
	I0210 13:54:55.976993  638492 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-643105
	
	I0210 13:54:55.977026  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:55.979980  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.980363  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:55.980400  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:55.980557  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:55.980741  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.980910  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:55.981095  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:55.981283  638492 main.go:141] libmachine: Using SSH client type: native
	I0210 13:54:55.981464  638492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:54:55.981481  638492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-643105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-643105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-643105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 13:54:56.105795  638492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 13:54:56.105836  638492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 13:54:56.105860  638492 buildroot.go:174] setting up certificates
	I0210 13:54:56.105872  638492 provision.go:84] configureAuth start
	I0210 13:54:56.105886  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 13:54:56.106182  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 13:54:56.108706  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.109067  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.109096  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.109226  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:56.111351  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.111668  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.111695  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.111808  638492 provision.go:143] copyHostCerts
	I0210 13:54:56.111896  638492 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 13:54:56.111912  638492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 13:54:56.111973  638492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 13:54:56.112078  638492 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 13:54:56.112090  638492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 13:54:56.112137  638492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 13:54:56.112231  638492 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 13:54:56.112244  638492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 13:54:56.112301  638492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 13:54:56.112375  638492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-643105 san=[127.0.0.1 192.168.72.78 localhost minikube old-k8s-version-643105]
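
configureAuth above copies the shared CA material into place and then mints a server certificate whose subject alternative names are exactly the list logged: 127.0.0.1, 192.168.72.78, localhost, minikube and old-k8s-version-643105, signed with the minikube CA key. A minimal sketch of issuing such a cert with crypto/x509, assuming caCert and caKey are the already-parsed CA pair; the validity period mirrors the CertExpiration:26280h0m0s setting in the profile, and the rest is illustrative:

	package provision
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)
	
	// newServerCert issues a TLS server certificate signed by the CA, with the
	// same style of SAN list that the provisioner logs (IPs plus DNS names).
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-643105"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs exactly as listed in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.78")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-643105"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		return certPEM, key, nil
	}
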
	I0210 13:54:56.260152  638492 provision.go:177] copyRemoteCerts
	I0210 13:54:56.260210  638492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 13:54:56.260237  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:56.263200  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.263532  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.263563  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.263749  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:56.263967  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.264118  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:56.264263  638492 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 13:54:56.351719  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 13:54:56.377599  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 13:54:56.401157  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 13:54:56.424743  638492 provision.go:87] duration metric: took 318.85682ms to configureAuth
	I0210 13:54:56.424773  638492 buildroot.go:189] setting minikube options for container-runtime
	I0210 13:54:56.424968  638492 config.go:182] Loaded profile config "old-k8s-version-643105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 13:54:56.425071  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:56.427864  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.428231  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.428293  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.428448  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:56.428656  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.428843  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.428986  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:56.429154  638492 main.go:141] libmachine: Using SSH client type: native
	I0210 13:54:56.429368  638492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:54:56.429384  638492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 13:54:56.678568  638492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 13:54:56.678601  638492 main.go:141] libmachine: Checking connection to Docker...
	I0210 13:54:56.678610  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetURL
	I0210 13:54:56.680080  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | using libvirt version 6000000
	I0210 13:54:56.682378  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.682720  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.682755  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.682892  638492 main.go:141] libmachine: Docker is up and running!
	I0210 13:54:56.682906  638492 main.go:141] libmachine: Reticulating splines...
	I0210 13:54:56.682916  638492 client.go:171] duration metric: took 27.959108258s to LocalClient.Create
	I0210 13:54:56.682951  638492 start.go:167] duration metric: took 27.959189365s to libmachine.API.Create "old-k8s-version-643105"
	I0210 13:54:56.682965  638492 start.go:293] postStartSetup for "old-k8s-version-643105" (driver="kvm2")
	I0210 13:54:56.682977  638492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 13:54:56.683020  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:56.683294  638492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 13:54:56.683317  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:56.685518  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.685807  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.686141  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.686393  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:56.686715  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.686872  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:56.687306  638492 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 13:54:56.775727  638492 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 13:54:56.780252  638492 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 13:54:56.780309  638492 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 13:54:56.780397  638492 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 13:54:56.780478  638492 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 13:54:56.780563  638492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 13:54:56.794688  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:54:56.820697  638492 start.go:296] duration metric: took 137.716665ms for postStartSetup
	I0210 13:54:56.820762  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetConfigRaw
	I0210 13:54:56.821419  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 13:54:56.824294  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.824685  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.824717  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.824984  638492 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/config.json ...
	I0210 13:54:56.825174  638492 start.go:128] duration metric: took 28.12145779s to createHost
	I0210 13:54:56.825226  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:56.827856  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.828187  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.828227  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.828435  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:56.828645  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.828867  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.829021  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:56.829182  638492 main.go:141] libmachine: Using SSH client type: native
	I0210 13:54:56.829407  638492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 13:54:56.829424  638492 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 13:54:56.953486  638492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739195696.929147114
	
	I0210 13:54:56.953519  638492 fix.go:216] guest clock: 1739195696.929147114
	I0210 13:54:56.953530  638492 fix.go:229] Guest: 2025-02-10 13:54:56.929147114 +0000 UTC Remote: 2025-02-10 13:54:56.825187216 +0000 UTC m=+28.246941590 (delta=103.959898ms)
	I0210 13:54:56.953570  638492 fix.go:200] guest clock delta is within tolerance: 103.959898ms
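
The clock check runs `date +%s.%N` in the guest and compares it with the host-side timestamp taken for the same probe: 1739195696.929147114 is 2025-02-10 13:54:56.929147114 UTC, the host recorded 13:54:56.825187216 UTC, so the guest is ahead by 0.929147114 - 0.825187216 = 0.103959898 s, which is inside the tolerance, and no time adjustment is made. A two-line check of that arithmetic:

	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		guest := time.Date(2025, 2, 10, 13, 54, 56, 929147114, time.UTC)  // date +%s.%N in the guest
		remote := time.Date(2025, 2, 10, 13, 54, 56, 825187216, time.UTC) // host-side timestamp
		fmt.Println(guest.Sub(remote)) // 103.959898ms, matching the delta in the log
	}
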
	I0210 13:54:56.953578  638492 start.go:83] releasing machines lock for "old-k8s-version-643105", held for 28.249972744s
	I0210 13:54:56.953609  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:56.953915  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 13:54:56.956924  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.957412  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.957472  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.957648  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:56.958152  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:56.958345  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 13:54:56.958426  638492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 13:54:56.958472  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:56.958731  638492 ssh_runner.go:195] Run: cat /version.json
	I0210 13:54:56.958756  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 13:54:56.961433  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.961724  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.961767  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.961790  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.961955  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:56.962120  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.962237  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:56.962265  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:56.962270  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:56.962478  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 13:54:56.962487  638492 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 13:54:56.962631  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 13:54:56.962735  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 13:54:56.962883  638492 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 13:54:57.080905  638492 ssh_runner.go:195] Run: systemctl --version
	I0210 13:54:57.088266  638492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 13:54:57.255672  638492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 13:54:57.262539  638492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 13:54:57.262604  638492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 13:54:57.281015  638492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 13:54:57.281045  638492 start.go:495] detecting cgroup driver to use...
	I0210 13:54:57.281129  638492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 13:54:57.301397  638492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 13:54:57.318769  638492 docker.go:217] disabling cri-docker service (if available) ...
	I0210 13:54:57.318829  638492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 13:54:57.335240  638492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 13:54:57.350897  638492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 13:54:57.491723  638492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 13:54:57.690152  638492 docker.go:233] disabling docker service ...
	I0210 13:54:57.690230  638492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 13:54:57.713523  638492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 13:54:57.732054  638492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 13:54:57.920582  638492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 13:54:58.066420  638492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 13:54:58.082106  638492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 13:54:58.109476  638492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 13:54:58.109550  638492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:54:58.120786  638492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 13:54:58.120860  638492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:54:58.131594  638492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:54:58.143582  638492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 13:54:58.157202  638492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 13:54:58.168339  638492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 13:54:58.181767  638492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 13:54:58.181832  638492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 13:54:58.198280  638492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 13:54:58.211886  638492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:54:58.339691  638492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 13:54:58.433447  638492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 13:54:58.433536  638492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 13:54:58.438646  638492 start.go:563] Will wait 60s for crictl version
	I0210 13:54:58.438704  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:54:58.442826  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 13:54:58.487998  638492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 13:54:58.488083  638492 ssh_runner.go:195] Run: crio --version
	I0210 13:54:58.517079  638492 ssh_runner.go:195] Run: crio --version
	I0210 13:54:58.546319  638492 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 13:54:58.547540  638492 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 13:54:58.550283  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:58.550699  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 14:54:45 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 13:54:58.550728  638492 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 13:54:58.550900  638492 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0210 13:54:58.554922  638492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:54:58.567939  638492 kubeadm.go:883] updating cluster {Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 13:54:58.568110  638492 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 13:54:58.568179  638492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:54:58.603631  638492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:54:58.603709  638492 ssh_runner.go:195] Run: which lz4
	I0210 13:54:58.607978  638492 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 13:54:58.612220  638492 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 13:54:58.612265  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 13:55:00.285352  638492 crio.go:462] duration metric: took 1.677408967s to copy over tarball
	I0210 13:55:00.285446  638492 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 13:55:03.273830  638492 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.988353557s)
	I0210 13:55:03.273858  638492 crio.go:469] duration metric: took 2.988470519s to extract the tarball
	I0210 13:55:03.273866  638492 ssh_runner.go:146] rm: /preloaded.tar.lz4
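The preload sequence above is a check-then-transfer pattern: stat /preloaded.tar.lz4 on the node, copy the cached tarball over when the stat fails, extract it into /var with tar and lz4, then delete the tarball. A minimal standalone sketch of that pattern, assuming root on the node and a tar build that supports --xattrs and -I lz4; the paths are taken from the log and this is not minikube's own implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // path used in the log above

	// Existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4`.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing, it would be copied over first:", err)
		return
	}

	// Extract into /var, as in the log:
	// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	_ = os.Remove(tarball) // the log removes the tarball once extracted
}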
	I0210 13:55:03.331104  638492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 13:55:03.398814  638492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 13:55:03.398847  638492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 13:55:03.398963  638492 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:55:03.398981  638492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:55:03.398987  638492 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 13:55:03.398998  638492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:55:03.398963  638492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:55:03.398971  638492 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 13:55:03.399048  638492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:55:03.398945  638492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:55:03.400230  638492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:55:03.400548  638492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:55:03.400560  638492 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 13:55:03.400562  638492 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 13:55:03.400562  638492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:55:03.400612  638492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:55:03.400749  638492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:55:03.401386  638492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:55:03.572383  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 13:55:03.583736  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:55:03.603091  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:55:03.603319  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 13:55:03.603346  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 13:55:03.610007  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:55:03.671991  638492 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 13:55:03.672043  638492 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 13:55:03.672090  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:55:03.692126  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:55:03.712270  638492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 13:55:03.712342  638492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:55:03.712388  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:55:03.881439  638492 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 13:55:03.881492  638492 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 13:55:03.881534  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:55:03.881540  638492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 13:55:03.881575  638492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:55:03.881583  638492 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 13:55:03.881608  638492 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 13:55:03.881618  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:55:03.881642  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:55:03.881652  638492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 13:55:03.881674  638492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:55:03.881700  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:55:03.881704  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:55:03.881726  638492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 13:55:03.881750  638492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:55:03.881755  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:55:03.881777  638492 ssh_runner.go:195] Run: which crictl
	I0210 13:55:03.974064  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:55:03.974215  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:55:03.974299  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:55:03.974366  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:55:03.974463  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:55:03.974529  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:55:03.974614  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:55:04.156644  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:55:04.156717  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:55:04.156742  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:55:04.160987  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:55:04.161032  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:55:04.161070  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 13:55:04.161104  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 13:55:04.328401  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 13:55:04.328454  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 13:55:04.328546  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 13:55:04.333215  638492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 13:55:04.333271  638492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 13:55:04.333302  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 13:55:04.333319  638492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 13:55:04.455847  638492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 13:55:04.455878  638492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 13:55:04.455889  638492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 13:55:04.466166  638492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 13:55:04.466283  638492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 13:55:04.501505  638492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 13:55:04.657433  638492 cache_images.go:92] duration metric: took 1.258565938s to LoadCachedImages
	W0210 13:55:04.657515  638492 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
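With no preload visible to CRI-O, the fallback above (LoadCachedImages) checks each expected image with podman image inspect, clears stale tags with crictl rmi, and then tries to load per-image archives from the local cache, which fails here because those cache files were never written. A hedged sketch of the presence check and cleanup for a single image; the image name is one of the eight listed above, and the podman/crictl invocations are the ones shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.20.0" // one of the images listed above

	// `sudo podman image inspect --format {{.Id}}` prints the image ID when the image exists.
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
	if err == nil {
		fmt.Println("image present with ID", strings.TrimSpace(string(out)))
		return
	}

	// Not present: remove any stale tag, as the log does before reloading from the cache.
	fmt.Println("image missing, clearing stale tag before reload:", img)
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
}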
	I0210 13:55:04.657535  638492 kubeadm.go:934] updating node { 192.168.72.78 8443 v1.20.0 crio true true} ...
	I0210 13:55:04.657686  638492 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-643105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
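The [Unit]/[Service] fragment above becomes the kubelet systemd drop-in (the 429-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf copied a few lines below). A sketch of writing such a drop-in and reloading systemd, assuming root; the ExecStart line is abridged from the log, and this is not the bootstrapper's actual code:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Abridged from the ExecStart shown in the log above.
	dropIn := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-643105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.78
`
	// Equivalent of the mkdir, scp and daemon-reload steps in the log (needs root).
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		panic(err)
	}
}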
	I0210 13:55:04.657766  638492 ssh_runner.go:195] Run: crio config
	I0210 13:55:04.724104  638492 cni.go:84] Creating CNI manager for ""
	I0210 13:55:04.724135  638492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 13:55:04.724148  638492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 13:55:04.724185  638492 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.78 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-643105 NodeName:old-k8s-version-643105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 13:55:04.724395  638492 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-643105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
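The generated kubeadm.yaml above stacks four documents for the kubeadm.k8s.io/v1beta2 API used with Kubernetes v1.20: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. As a quick local sanity check before handing such a file to kubeadm, the sketch below (standard library only, so plain string inspection rather than real YAML validation) splits the multi-document file at the bare --- separators and reports each document's kind; the path is the one used in the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path used in the log
	if err != nil {
		panic(err)
	}
	// Assumes documents are separated by a line containing only "---", as in the dump above.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}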
	I0210 13:55:04.724476  638492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 13:55:04.741131  638492 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 13:55:04.741214  638492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 13:55:04.759212  638492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0210 13:55:04.782063  638492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 13:55:04.805121  638492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0210 13:55:04.831102  638492 ssh_runner.go:195] Run: grep 192.168.72.78	control-plane.minikube.internal$ /etc/hosts
	I0210 13:55:04.835737  638492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 13:55:04.857180  638492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 13:55:04.994804  638492 ssh_runner.go:195] Run: sudo systemctl start kubelet
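The bash one-liner above (grep -v, echo, cp) rewrites /etc/hosts so control-plane.minikube.internal always resolves to the node IP before the kubelet is started. The same idempotent drop-old-entry-then-append step as a sketch; the hostname and IP are the ones from this run, and unlike the grep it also drops blank lines:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.78\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop stale control-plane entries (and blank lines), like the grep -v in the log.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") || line == "" {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		panic(err)
	}
}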
	I0210 13:55:05.017793  638492 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105 for IP: 192.168.72.78
	I0210 13:55:05.017822  638492 certs.go:194] generating shared ca certs ...
	I0210 13:55:05.017845  638492 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:55:05.018039  638492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 13:55:05.018104  638492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 13:55:05.018118  638492 certs.go:256] generating profile certs ...
	I0210 13:55:05.018198  638492 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.key
	I0210 13:55:05.018220  638492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.crt with IP's: []
	I0210 13:55:05.159979  638492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.crt ...
	I0210 13:55:05.160008  638492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.crt: {Name:mka56f180470c797b6db634aaa833c19238447f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:55:05.160204  638492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.key ...
	I0210 13:55:05.160219  638492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.key: {Name:mk6ba216836349798f2ae5ce131301c2a1f8e03e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:55:05.256412  638492 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key.2b43ede7
	I0210 13:55:05.256479  638492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt.2b43ede7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.78]
	I0210 13:55:05.474787  638492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt.2b43ede7 ...
	I0210 13:55:05.474908  638492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt.2b43ede7: {Name:mk33ab1dbdf1d69d7f11e59142dc782b5ffae3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:55:05.475182  638492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key.2b43ede7 ...
	I0210 13:55:05.475238  638492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key.2b43ede7: {Name:mk88115c067b5a25c080459593a863e353ac9866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:55:05.475405  638492 certs.go:381] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt.2b43ede7 -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt
	I0210 13:55:05.475564  638492 certs.go:385] copying /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key.2b43ede7 -> /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key
	I0210 13:55:05.475696  638492 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key
	I0210 13:55:05.475743  638492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.crt with IP's: []
	I0210 13:55:05.579198  638492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.crt ...
	I0210 13:55:05.579232  638492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.crt: {Name:mk950540a106cf89430bdb6378e0fb460a081e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 13:55:05.669601  638492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key ...
	I0210 13:55:05.669656  638492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key: {Name:mk9d4e63302ea0b850d6382d951559e540c2827b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
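The certs.go/crypto.go lines above produce three profile certificates signed by the shared minikube CA: a client certificate, an apiserver serving certificate whose SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP, and an aggregator (proxy-client) certificate. A compact standard-library sketch of issuing a CA-signed serving certificate with those IP SANs; the CA here is generated on the fly purely for illustration (the log reuses an existing ca.crt/ca.key), and the key size and validity periods are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; in the log an existing CA key pair is reused instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate with the SAN IPs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.78"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}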
	I0210 13:55:05.669973  638492 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 13:55:05.670038  638492 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 13:55:05.670056  638492 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 13:55:05.670091  638492 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 13:55:05.670128  638492 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 13:55:05.670163  638492 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 13:55:05.670228  638492 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 13:55:05.671225  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 13:55:05.702000  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 13:55:05.728930  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 13:55:05.755126  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 13:55:05.779867  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 13:55:05.805923  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 13:55:05.839368  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 13:55:05.870659  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 13:55:05.907335  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 13:55:05.942635  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 13:55:05.979824  638492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 13:55:06.032149  638492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 13:55:06.058314  638492 ssh_runner.go:195] Run: openssl version
	I0210 13:55:06.065161  638492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 13:55:06.083231  638492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:55:06.089323  638492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:55:06.089404  638492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 13:55:06.097357  638492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 13:55:06.109820  638492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 13:55:06.121308  638492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 13:55:06.126472  638492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 13:55:06.126536  638492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 13:55:06.132842  638492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 13:55:06.144499  638492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 13:55:06.155667  638492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 13:55:06.161334  638492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 13:55:06.161389  638492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 13:55:06.167188  638492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
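The openssl and ln calls above are the usual way a PEM file under /usr/share/ca-certificates becomes trusted: openssl x509 -hash -noout prints the subject hash, and a symlink named <hash>.0 is created in /etc/ssl/certs. A sketch reproducing those two steps for one of the files from the log; it shells out to openssl for the hash, as the log does, and needs root for the symlink:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // one of the files in the log

	// Subject hash, as produced by `openssl x509 -hash -noout -in <pem>`.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of `test -L <link> || ln -fs <pem> <link>` (needs root).
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted via", link)
}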
	I0210 13:55:06.178867  638492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 13:55:06.183370  638492 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 13:55:06.183438  638492 kubeadm.go:392] StartCluster: {Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 13:55:06.183532  638492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 13:55:06.183574  638492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 13:55:06.226646  638492 cri.go:89] found id: ""
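Before initializing, the log asks CRI-O whether any kube-system containers already exist; the crictl ps call above returns no IDs (found id: ""), so there is nothing to clean up. The same query as a standalone sketch, assuming root and a running CRI-O on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the command run over SSH in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found; nothing to clean up")
		return
	}
	fmt.Println("existing kube-system containers:", ids)
}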
	I0210 13:55:06.226746  638492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 13:55:06.239526  638492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 13:55:06.250424  638492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:55:06.261153  638492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:55:06.261180  638492 kubeadm.go:157] found existing configuration files:
	
	I0210 13:55:06.261230  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:55:06.273421  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:55:06.273479  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:55:06.286532  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:55:06.298596  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:55:06.298661  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:55:06.310378  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:55:06.322600  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:55:06.322665  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:55:06.333816  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:55:06.345635  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:55:06.345698  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
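The block above is the stale-config check: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is grepped for the expected https://control-plane.minikube.internal:8443 endpoint and removed when the endpoint is absent; here the files do not exist yet, so every grep exits with status 2 and the rm is a no-op. A sketch of that check for a single file:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	path := "/etc/kubernetes/admin.conf" // one of the four files checked in the log

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("config not present, nothing to clean up:", err)
		return
	}
	if !strings.Contains(string(data), endpoint) {
		// Stale config pointing somewhere else: remove it, as the log does.
		if err := os.Remove(path); err != nil {
			panic(err)
		}
		fmt.Println("removed stale", path)
		return
	}
	fmt.Println(path, "already points at", endpoint)
}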
	I0210 13:55:06.358656  638492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:55:06.570572  638492 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:55:06.570685  638492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:55:06.761275  638492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:55:06.761411  638492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:55:06.761549  638492 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:55:07.018651  638492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:55:07.021233  638492 out.go:235]   - Generating certificates and keys ...
	I0210 13:55:07.021328  638492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:55:07.021410  638492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:55:07.198162  638492 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 13:55:07.551257  638492 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 13:55:07.723725  638492 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 13:55:07.944976  638492 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 13:55:08.291226  638492 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 13:55:08.292784  638492 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-643105] and IPs [192.168.72.78 127.0.0.1 ::1]
	I0210 13:55:08.434282  638492 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 13:55:08.434467  638492 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-643105] and IPs [192.168.72.78 127.0.0.1 ::1]
	I0210 13:55:08.606870  638492 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 13:55:09.318455  638492 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 13:55:09.553364  638492 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 13:55:09.554132  638492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:55:09.634525  638492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:55:09.926078  638492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:55:10.041495  638492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:55:10.390670  638492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:55:10.415888  638492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:55:10.418096  638492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:55:10.418166  638492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:55:10.600861  638492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:55:10.602501  638492 out.go:235]   - Booting up control plane ...
	I0210 13:55:10.602631  638492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:55:10.610718  638492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:55:10.612421  638492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:55:10.613574  638492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:55:10.619713  638492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:55:50.616902  638492 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:55:50.618022  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:55:50.618288  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:55:55.618124  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:55:55.618381  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:56:05.618336  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:56:05.618605  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:56:25.618394  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:56:25.618700  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:57:05.620118  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:57:05.620356  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:57:05.620370  638492 kubeadm.go:310] 
	I0210 13:57:05.620402  638492 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:57:05.620435  638492 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:57:05.620442  638492 kubeadm.go:310] 
	I0210 13:57:05.620468  638492 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:57:05.620496  638492 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:57:05.620583  638492 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:57:05.620591  638492 kubeadm.go:310] 
	I0210 13:57:05.620674  638492 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:57:05.620702  638492 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:57:05.620734  638492 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:57:05.620764  638492 kubeadm.go:310] 
	I0210 13:57:05.620910  638492 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:57:05.621013  638492 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:57:05.621024  638492 kubeadm.go:310] 
	I0210 13:57:05.621136  638492 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:57:05.621217  638492 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:57:05.621289  638492 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:57:05.621374  638492 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:57:05.621389  638492 kubeadm.go:310] 
	I0210 13:57:05.622231  638492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:57:05.622356  638492 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:57:05.622474  638492 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
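The failure above comes from kubeadm's kubelet-check loop, which polls http://localhost:10248/healthz and eventually gives up while the connection is still being refused; hence the suggested follow-ups of systemctl status kubelet, journalctl -xeu kubelet and crictl ps -a. The same health probe as a small sketch that can be run on the node while debugging (retry count and sleep interval are arbitrary):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	// Poll the kubelet healthz endpoint used by kubeadm's kubelet-check.
	for i := 0; i < 5; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not answering:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
		return
	}
	fmt.Println("giving up; check 'journalctl -xeu kubelet' next")
}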
	W0210 13:57:05.622664  638492 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-643105] and IPs [192.168.72.78 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-643105] and IPs [192.168.72.78 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 13:57:05.622710  638492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 13:57:06.157226  638492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:57:06.171804  638492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 13:57:06.182186  638492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 13:57:06.182213  638492 kubeadm.go:157] found existing configuration files:
	
	I0210 13:57:06.182270  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 13:57:06.192524  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 13:57:06.192604  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 13:57:06.203561  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 13:57:06.213544  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 13:57:06.213620  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 13:57:06.224619  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 13:57:06.235288  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 13:57:06.235366  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 13:57:06.247537  638492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 13:57:06.259137  638492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 13:57:06.259204  638492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 13:57:06.271316  638492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 13:57:06.344753  638492 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 13:57:06.344846  638492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 13:57:06.502788  638492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 13:57:06.502940  638492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 13:57:06.503075  638492 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 13:57:06.701041  638492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 13:57:06.703323  638492 out.go:235]   - Generating certificates and keys ...
	I0210 13:57:06.703437  638492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 13:57:06.703541  638492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 13:57:06.703657  638492 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 13:57:06.703734  638492 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 13:57:06.703839  638492 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 13:57:06.703925  638492 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 13:57:06.704012  638492 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 13:57:06.704109  638492 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 13:57:06.704226  638492 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 13:57:06.704352  638492 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 13:57:06.704409  638492 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 13:57:06.704492  638492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 13:57:06.973097  638492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 13:57:07.059382  638492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 13:57:07.409642  638492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 13:57:07.502515  638492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 13:57:07.517195  638492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 13:57:07.518259  638492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 13:57:07.518320  638492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 13:57:07.657385  638492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 13:57:07.660082  638492 out.go:235]   - Booting up control plane ...
	I0210 13:57:07.660237  638492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 13:57:07.670475  638492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 13:57:07.671510  638492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 13:57:07.672372  638492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 13:57:07.674730  638492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 13:57:47.677305  638492 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 13:57:47.677818  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:57:47.678027  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:57:52.678243  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:57:52.678466  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:58:02.678472  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:58:02.678648  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:58:22.678008  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:58:22.678318  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:59:02.677980  638492 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 13:59:02.678267  638492 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 13:59:02.678296  638492 kubeadm.go:310] 
	I0210 13:59:02.678349  638492 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 13:59:02.678408  638492 kubeadm.go:310] 		timed out waiting for the condition
	I0210 13:59:02.678421  638492 kubeadm.go:310] 
	I0210 13:59:02.678467  638492 kubeadm.go:310] 	This error is likely caused by:
	I0210 13:59:02.678524  638492 kubeadm.go:310] 		- The kubelet is not running
	I0210 13:59:02.678669  638492 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 13:59:02.678684  638492 kubeadm.go:310] 
	I0210 13:59:02.678810  638492 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 13:59:02.678860  638492 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 13:59:02.678904  638492 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 13:59:02.678913  638492 kubeadm.go:310] 
	I0210 13:59:02.679044  638492 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 13:59:02.679156  638492 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 13:59:02.679169  638492 kubeadm.go:310] 
	I0210 13:59:02.679297  638492 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 13:59:02.679405  638492 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 13:59:02.679505  638492 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 13:59:02.679598  638492 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 13:59:02.679609  638492 kubeadm.go:310] 
	I0210 13:59:02.680509  638492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 13:59:02.680630  638492 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 13:59:02.680744  638492 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 13:59:02.680840  638492 kubeadm.go:394] duration metric: took 3m56.497407172s to StartCluster
	I0210 13:59:02.680928  638492 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 13:59:02.681006  638492 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 13:59:02.730494  638492 cri.go:89] found id: ""
	I0210 13:59:02.730526  638492 logs.go:282] 0 containers: []
	W0210 13:59:02.730535  638492 logs.go:284] No container was found matching "kube-apiserver"
	I0210 13:59:02.730542  638492 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 13:59:02.730607  638492 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 13:59:02.782120  638492 cri.go:89] found id: ""
	I0210 13:59:02.782153  638492 logs.go:282] 0 containers: []
	W0210 13:59:02.782164  638492 logs.go:284] No container was found matching "etcd"
	I0210 13:59:02.782175  638492 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 13:59:02.782262  638492 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 13:59:02.822062  638492 cri.go:89] found id: ""
	I0210 13:59:02.822089  638492 logs.go:282] 0 containers: []
	W0210 13:59:02.822097  638492 logs.go:284] No container was found matching "coredns"
	I0210 13:59:02.822102  638492 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 13:59:02.822154  638492 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 13:59:02.871480  638492 cri.go:89] found id: ""
	I0210 13:59:02.871513  638492 logs.go:282] 0 containers: []
	W0210 13:59:02.871524  638492 logs.go:284] No container was found matching "kube-scheduler"
	I0210 13:59:02.871532  638492 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 13:59:02.871602  638492 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 13:59:02.924878  638492 cri.go:89] found id: ""
	I0210 13:59:02.924910  638492 logs.go:282] 0 containers: []
	W0210 13:59:02.924918  638492 logs.go:284] No container was found matching "kube-proxy"
	I0210 13:59:02.924924  638492 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 13:59:02.924997  638492 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 13:59:02.972988  638492 cri.go:89] found id: ""
	I0210 13:59:02.973021  638492 logs.go:282] 0 containers: []
	W0210 13:59:02.973032  638492 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 13:59:02.973044  638492 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 13:59:02.973121  638492 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 13:59:03.014915  638492 cri.go:89] found id: ""
	I0210 13:59:03.014953  638492 logs.go:282] 0 containers: []
	W0210 13:59:03.014973  638492 logs.go:284] No container was found matching "kindnet"
	I0210 13:59:03.014997  638492 logs.go:123] Gathering logs for describe nodes ...
	I0210 13:59:03.015026  638492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 13:59:03.169121  638492 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 13:59:03.169156  638492 logs.go:123] Gathering logs for CRI-O ...
	I0210 13:59:03.169173  638492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 13:59:03.293357  638492 logs.go:123] Gathering logs for container status ...
	I0210 13:59:03.293398  638492 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 13:59:03.345545  638492 logs.go:123] Gathering logs for kubelet ...
	I0210 13:59:03.345575  638492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 13:59:03.398607  638492 logs.go:123] Gathering logs for dmesg ...
	I0210 13:59:03.398648  638492 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0210 13:59:03.414452  638492 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 13:59:03.414519  638492 out.go:270] * 
	* 
	W0210 13:59:03.414593  638492 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:59:03.414614  638492 out.go:270] * 
	* 
	W0210 13:59:03.415574  638492 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 13:59:03.418844  638492 out.go:201] 
	W0210 13:59:03.420094  638492 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 13:59:03.420133  638492 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 13:59:03.420154  638492 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 13:59:03.421604  638492 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 6 (255.756281ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:59:03.721796  643518 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-643105" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-643105" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (275.17s)
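The kubeadm output above repeats the same diagnosis throughout: the kubelet health endpoint at http://localhost:10248/healthz refuses connections for the full 4m0s wait, no control-plane container ever shows up in crictl, and minikube finally suggests checking the kubelet journal and retrying with --extra-config=kubelet.cgroup-driver=systemd. A minimal troubleshooting sketch against this profile, assuming shell access through `minikube ssh` and the CRI-O socket path shown in the log (the retry line simply restates the original start arguments plus minikube's own suggestion; none of this is part of the test itself), could look like:

	# kubelet state and journal, as recommended in the kubeadm output
	out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo journalctl -xeu kubelet"
	# list any Kubernetes containers CRI-O managed to start, then inspect a failing one by ID
	out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"
	# retry the failed first start with the cgroup-driver override minikube suggests
	out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd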

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-643105 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-643105 create -f testdata/busybox.yaml: exit status 1 (59.943735ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-643105" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-643105 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 6 (232.478449ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:59:04.015685  643558 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-643105" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-643105" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 6 (264.210508ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:59:04.277199  643588 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-643105" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-643105" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)
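This DeployApp failure is downstream of the FirstStart one: `kubectl create -f testdata/busybox.yaml` is rejected because the context "old-k8s-version-643105" was never written to the kubeconfig, and the status output explicitly warns that kubectl still points at a stale minikube-vm. A short sketch of the repair sequence that warning points to, assuming the profile does come up on a later start, might be:

	# see which contexts the kubeconfig actually contains
	kubectl config get-contexts
	# regenerate the endpoint/context for this profile, as the status warning recommends
	out/minikube-linux-amd64 -p old-k8s-version-643105 update-context
	# switch to it and re-run the deploy step from the test
	kubectl config use-context old-k8s-version-643105
	kubectl --context old-k8s-version-643105 create -f testdata/busybox.yaml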

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-643105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0210 13:59:05.295643  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:06.267377  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:10.417863  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:20.659814  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:35.564749  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:35.571117  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:35.582548  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:35.603987  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:35.645480  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:35.726853  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:35.888395  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:36.210274  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:36.852598  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:38.134466  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:38.596349  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:40.695958  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:41.141507  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:45.818027  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:47.442653  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:47.449063  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:47.460442  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:47.481835  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:47.523278  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:47.604744  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:47.766310  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:48.088211  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:48.730354  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:50.011895  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:52.573743  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:56.060325  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:57.695064  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:00:07.937097  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:00:16.542559  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:00:22.103730  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:00:28.189543  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:00:28.419111  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:00:33.642484  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-643105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.432086632s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-643105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-643105 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-643105 describe deploy/metrics-server -n kube-system: exit status 1 (47.976102ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-643105" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-643105 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
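The root cause surfaced above is that kube-apiserver on localhost:8443 is not reachable after the stop, and the profile's kubectl context is missing, so both the addon enable and the describe step fail before any image check can happen. A minimal sketch of how one might probe that state by hand, assuming the same profile name and the binary under test at out/minikube-linux-amd64 (standard minikube/kubectl invocations, not commands taken from this run):

# does minikube consider the VM and apiserver healthy for this profile?
out/minikube-linux-amd64 status -p old-k8s-version-643105

# list CRI-O containers in the guest to check whether kube-apiserver is running at all
out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo crictl ps"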
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 6 (243.175315ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 14:00:39.005335  644083 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-643105" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-643105" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.72s)
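For the stale-kubeconfig state reported by the status check above, the recovery the output itself suggests is minikube update-context, which rewrites the profile's kubeconfig entry to point at the running VM. A short follow-up sketch, assuming the cluster has been started again and using the same profile and binary path (hypothetical follow-up commands, not part of this run):

# regenerate the kubeconfig entry for the profile
out/minikube-linux-amd64 -p old-k8s-version-643105 update-context

# confirm the context exists and the API server answers
kubectl config get-contexts old-k8s-version-643105
kubectl --context old-k8s-version-643105 get nodes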

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (508.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0210 14:00:43.815883  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:00:57.504908  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:01:00.517709  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:01:09.380888  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:01:11.518962  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:01:18.943011  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:01:44.025722  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:01:46.648460  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:02:13.584859  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:02:19.427217  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:02:31.302709  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:02:44.328169  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:03:12.031913  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:03:16.655821  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m26.546295905s)

                                                
                                                
-- stdout --
	* [old-k8s-version-643105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-643105" primary control-plane node in "old-k8s-version-643105" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-643105" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 14:00:41.840976  644218 out.go:345] Setting OutFile to fd 1 ...
	I0210 14:00:41.841244  644218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:00:41.841254  644218 out.go:358] Setting ErrFile to fd 2...
	I0210 14:00:41.841258  644218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:00:41.841448  644218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 14:00:41.841985  644218 out.go:352] Setting JSON to false
	I0210 14:00:41.843021  644218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13387,"bootTime":1739182655,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 14:00:41.843087  644218 start.go:139] virtualization: kvm guest
	I0210 14:00:41.845249  644218 out.go:177] * [old-k8s-version-643105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 14:00:41.847199  644218 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 14:00:41.847096  644218 notify.go:220] Checking for updates...
	I0210 14:00:41.850042  644218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 14:00:41.851411  644218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:00:41.852668  644218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 14:00:41.853832  644218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 14:00:41.855061  644218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 14:00:41.856768  644218 config.go:182] Loaded profile config "old-k8s-version-643105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 14:00:41.857113  644218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:00:41.857187  644218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:00:41.872520  644218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I0210 14:00:41.873024  644218 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:00:41.873633  644218 main.go:141] libmachine: Using API Version  1
	I0210 14:00:41.873656  644218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:00:41.873983  644218 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:00:41.874226  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:00:41.875969  644218 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 14:00:41.877309  644218 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 14:00:41.877801  644218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:00:41.877851  644218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:00:41.893162  644218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I0210 14:00:41.893566  644218 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:00:41.894098  644218 main.go:141] libmachine: Using API Version  1
	I0210 14:00:41.894123  644218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:00:41.894424  644218 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:00:41.894610  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:00:41.929538  644218 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 14:00:41.930719  644218 start.go:297] selected driver: kvm2
	I0210 14:00:41.930738  644218 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-6
43105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:00:41.930864  644218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 14:00:41.931823  644218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:00:41.931930  644218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 14:00:41.946635  644218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 14:00:41.947040  644218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 14:00:41.947074  644218 cni.go:84] Creating CNI manager for ""
	I0210 14:00:41.947123  644218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:00:41.947165  644218 start.go:340] cluster config:
	{Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:00:41.947263  644218 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:00:41.949650  644218 out.go:177] * Starting "old-k8s-version-643105" primary control-plane node in "old-k8s-version-643105" cluster
	I0210 14:00:41.951045  644218 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 14:00:41.951088  644218 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 14:00:41.951103  644218 cache.go:56] Caching tarball of preloaded images
	I0210 14:00:41.951198  644218 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 14:00:41.951214  644218 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0210 14:00:41.951327  644218 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/config.json ...
	I0210 14:00:41.951499  644218 start.go:360] acquireMachinesLock for old-k8s-version-643105: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 14:00:41.951543  644218 start.go:364] duration metric: took 25.67µs to acquireMachinesLock for "old-k8s-version-643105"
	I0210 14:00:41.951571  644218 start.go:96] Skipping create...Using existing machine configuration
	I0210 14:00:41.951579  644218 fix.go:54] fixHost starting: 
	I0210 14:00:41.951830  644218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:00:41.951874  644218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:00:41.965913  644218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0210 14:00:41.966370  644218 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:00:41.966823  644218 main.go:141] libmachine: Using API Version  1
	I0210 14:00:41.966844  644218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:00:41.967114  644218 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:00:41.967305  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:00:41.967438  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetState
	I0210 14:00:41.968911  644218 fix.go:112] recreateIfNeeded on old-k8s-version-643105: state=Stopped err=<nil>
	I0210 14:00:41.968939  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	W0210 14:00:41.969085  644218 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 14:00:41.970891  644218 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-643105" ...
	I0210 14:00:41.971991  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .Start
	I0210 14:00:41.972215  644218 main.go:141] libmachine: (old-k8s-version-643105) starting domain...
	I0210 14:00:41.972236  644218 main.go:141] libmachine: (old-k8s-version-643105) ensuring networks are active...
	I0210 14:00:41.973021  644218 main.go:141] libmachine: (old-k8s-version-643105) Ensuring network default is active
	I0210 14:00:41.973394  644218 main.go:141] libmachine: (old-k8s-version-643105) Ensuring network mk-old-k8s-version-643105 is active
	I0210 14:00:41.973735  644218 main.go:141] libmachine: (old-k8s-version-643105) getting domain XML...
	I0210 14:00:41.974618  644218 main.go:141] libmachine: (old-k8s-version-643105) creating domain...
	I0210 14:00:43.237958  644218 main.go:141] libmachine: (old-k8s-version-643105) waiting for IP...
	I0210 14:00:43.238951  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:43.239391  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:43.239494  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:43.239388  644254 retry.go:31] will retry after 206.182886ms: waiting for domain to come up
	I0210 14:00:43.446966  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:43.447547  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:43.447574  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:43.447518  644254 retry.go:31] will retry after 329.362933ms: waiting for domain to come up
	I0210 14:00:43.777967  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:43.778519  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:43.778554  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:43.778477  644254 retry.go:31] will retry after 346.453199ms: waiting for domain to come up
	I0210 14:00:44.127152  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:44.127724  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:44.127781  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:44.127714  644254 retry.go:31] will retry after 369.587225ms: waiting for domain to come up
	I0210 14:00:44.499259  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:44.499894  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:44.499927  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:44.499829  644254 retry.go:31] will retry after 551.579789ms: waiting for domain to come up
	I0210 14:00:45.052851  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:45.053389  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:45.053422  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:45.053344  644254 retry.go:31] will retry after 842.776955ms: waiting for domain to come up
	I0210 14:00:45.897296  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:45.897745  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:45.897769  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:45.897724  644254 retry.go:31] will retry after 1.081690621s: waiting for domain to come up
	I0210 14:00:46.980845  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:46.981454  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:46.981483  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:46.981421  644254 retry.go:31] will retry after 1.310681169s: waiting for domain to come up
	I0210 14:00:48.293826  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:48.294265  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:48.294298  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:48.294220  644254 retry.go:31] will retry after 1.237090549s: waiting for domain to come up
	I0210 14:00:49.533469  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:49.534006  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:49.534094  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:49.533968  644254 retry.go:31] will retry after 1.844597316s: waiting for domain to come up
	I0210 14:00:51.379889  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:51.380473  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:51.380503  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:51.380434  644254 retry.go:31] will retry after 2.170543895s: waiting for domain to come up
	I0210 14:00:53.553350  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:53.553858  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:53.553887  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:53.553814  644254 retry.go:31] will retry after 3.463243718s: waiting for domain to come up
	I0210 14:00:57.018476  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:57.018995  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | unable to find current IP address of domain old-k8s-version-643105 in network mk-old-k8s-version-643105
	I0210 14:00:57.019016  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | I0210 14:00:57.018938  644254 retry.go:31] will retry after 2.849149701s: waiting for domain to come up
	I0210 14:00:59.871921  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.872407  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has current primary IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.872442  644218 main.go:141] libmachine: (old-k8s-version-643105) found domain IP: 192.168.72.78
	I0210 14:00:59.872459  644218 main.go:141] libmachine: (old-k8s-version-643105) reserving static IP address...
	I0210 14:00:59.872874  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "old-k8s-version-643105", mac: "52:54:00:de:ed:f5", ip: "192.168.72.78"} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:00:59.872912  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | skip adding static IP to network mk-old-k8s-version-643105 - found existing host DHCP lease matching {name: "old-k8s-version-643105", mac: "52:54:00:de:ed:f5", ip: "192.168.72.78"}
	I0210 14:00:59.872926  644218 main.go:141] libmachine: (old-k8s-version-643105) reserved static IP address 192.168.72.78 for domain old-k8s-version-643105
	I0210 14:00:59.872949  644218 main.go:141] libmachine: (old-k8s-version-643105) waiting for SSH...
	I0210 14:00:59.872967  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | Getting to WaitForSSH function...
	I0210 14:00:59.874962  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.875311  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:00:59.875344  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:00:59.875469  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH client type: external
	I0210 14:00:59.875491  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa (-rw-------)
	I0210 14:00:59.875537  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 14:00:59.875555  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | About to run SSH command:
	I0210 14:00:59.875568  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | exit 0
	I0210 14:00:59.996273  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | SSH cmd err, output: <nil>: 
	I0210 14:00:59.996664  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetConfigRaw
	I0210 14:00:59.997452  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:00:59.999899  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.000417  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.000441  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.000725  644218 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/config.json ...
	I0210 14:01:00.000950  644218 machine.go:93] provisionDockerMachine start ...
	I0210 14:01:00.000973  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:00.001218  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.003616  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.003975  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.004009  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.004135  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.004346  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.004533  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.004647  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.004837  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.005071  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.005083  644218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 14:01:00.104866  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 14:01:00.104903  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 14:01:00.105187  644218 buildroot.go:166] provisioning hostname "old-k8s-version-643105"
	I0210 14:01:00.105215  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 14:01:00.105403  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.108197  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.108678  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.108707  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.108836  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.109038  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.109213  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.109374  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.109547  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.109792  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.109807  644218 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-643105 && echo "old-k8s-version-643105" | sudo tee /etc/hostname
	I0210 14:01:00.227428  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-643105
	
	I0210 14:01:00.227461  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.230205  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.230529  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.230560  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.230756  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.230987  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.231161  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.231272  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.231422  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.231655  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.231680  644218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-643105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-643105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-643105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 14:01:00.346932  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 14:01:00.346964  644218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 14:01:00.347020  644218 buildroot.go:174] setting up certificates
	I0210 14:01:00.347031  644218 provision.go:84] configureAuth start
	I0210 14:01:00.347041  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetMachineName
	I0210 14:01:00.347306  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:01:00.350130  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.350530  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.350567  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.350764  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.353240  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.353564  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.353610  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.353714  644218 provision.go:143] copyHostCerts
	I0210 14:01:00.353795  644218 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 14:01:00.353810  644218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 14:01:00.353892  644218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 14:01:00.354042  644218 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 14:01:00.354055  644218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 14:01:00.354100  644218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 14:01:00.354190  644218 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 14:01:00.354200  644218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 14:01:00.354235  644218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 14:01:00.354321  644218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-643105 san=[127.0.0.1 192.168.72.78 localhost minikube old-k8s-version-643105]
	I0210 14:01:00.582524  644218 provision.go:177] copyRemoteCerts
	I0210 14:01:00.582605  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 14:01:00.582641  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.585672  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.586128  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.586164  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.586335  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.586557  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.586701  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.586806  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:00.667733  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 14:01:00.694010  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 14:01:00.719848  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 14:01:00.745526  644218 provision.go:87] duration metric: took 398.480071ms to configureAuth
	I0210 14:01:00.745561  644218 buildroot.go:189] setting minikube options for container-runtime
	I0210 14:01:00.745788  644218 config.go:182] Loaded profile config "old-k8s-version-643105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0210 14:01:00.745891  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.748846  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.749225  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.749256  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.749467  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.749682  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.749863  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.749997  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.750138  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:00.750322  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:00.750341  644218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 14:01:00.990441  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 14:01:00.990484  644218 machine.go:96] duration metric: took 989.502089ms to provisionDockerMachine
	I0210 14:01:00.990496  644218 start.go:293] postStartSetup for "old-k8s-version-643105" (driver="kvm2")
	I0210 14:01:00.990509  644218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 14:01:00.990526  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:00.990830  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 14:01:00.990865  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:00.993504  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.993870  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:00.993909  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:00.994111  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:00.994281  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:00.994462  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:00.994624  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:01.076590  644218 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 14:01:01.081371  644218 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 14:01:01.081401  644218 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 14:01:01.081474  644218 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 14:01:01.081597  644218 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 14:01:01.081759  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 14:01:01.091951  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:01:01.117344  644218 start.go:296] duration metric: took 126.828836ms for postStartSetup
	I0210 14:01:01.117395  644218 fix.go:56] duration metric: took 19.165814332s for fixHost
	I0210 14:01:01.117426  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:01.120411  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.120784  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.120826  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.120963  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:01.121266  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.121451  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.121603  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:01.121806  644218 main.go:141] libmachine: Using SSH client type: native
	I0210 14:01:01.121987  644218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.78 22 <nil> <nil>}
	I0210 14:01:01.122000  644218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 14:01:01.225245  644218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739196061.196371401
	
	I0210 14:01:01.225274  644218 fix.go:216] guest clock: 1739196061.196371401
	I0210 14:01:01.225284  644218 fix.go:229] Guest: 2025-02-10 14:01:01.196371401 +0000 UTC Remote: 2025-02-10 14:01:01.117401189 +0000 UTC m=+19.314698018 (delta=78.970212ms)
	I0210 14:01:01.225307  644218 fix.go:200] guest clock delta is within tolerance: 78.970212ms
	I0210 14:01:01.225312  644218 start.go:83] releasing machines lock for "old-k8s-version-643105", held for 19.273758703s
	I0210 14:01:01.225331  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.225635  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:01:01.228728  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.229154  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.229184  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.229307  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.229831  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.230027  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .DriverName
	I0210 14:01:01.230136  644218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 14:01:01.230183  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:01.230279  644218 ssh_runner.go:195] Run: cat /version.json
	I0210 14:01:01.230308  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHHostname
	I0210 14:01:01.232882  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233201  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.233244  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233265  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233380  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:01.233549  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.233732  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:01.233765  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:01.233760  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:01.233914  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHPort
	I0210 14:01:01.233972  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:01.234062  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHKeyPath
	I0210 14:01:01.234210  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetSSHUsername
	I0210 14:01:01.234379  644218 sshutil.go:53] new ssh client: &{IP:192.168.72.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/old-k8s-version-643105/id_rsa Username:docker}
	I0210 14:01:01.309536  644218 ssh_runner.go:195] Run: systemctl --version
	I0210 14:01:01.334102  644218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 14:01:01.486141  644218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 14:01:01.492934  644218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 14:01:01.493017  644218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 14:01:01.512726  644218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 14:01:01.512760  644218 start.go:495] detecting cgroup driver to use...
	I0210 14:01:01.512824  644218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 14:01:01.530256  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 14:01:01.545115  644218 docker.go:217] disabling cri-docker service (if available) ...
	I0210 14:01:01.545186  644218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 14:01:01.563057  644218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 14:01:01.578117  644218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 14:01:01.694843  644218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 14:01:01.827391  644218 docker.go:233] disabling docker service ...
	I0210 14:01:01.827476  644218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 14:01:01.843342  644218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 14:01:01.857886  644218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 14:01:01.992715  644218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 14:01:02.114653  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 14:01:02.129432  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 14:01:02.149788  644218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0210 14:01:02.149895  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.161677  644218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 14:01:02.161759  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.172851  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.183669  644218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:01:02.194818  644218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 14:01:02.205759  644218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 14:01:02.215660  644218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 14:01:02.215706  644218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 14:01:02.230109  644218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 14:01:02.240154  644218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:01:02.371171  644218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 14:01:02.470149  644218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 14:01:02.470240  644218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 14:01:02.475602  644218 start.go:563] Will wait 60s for crictl version
	I0210 14:01:02.475664  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:02.480049  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 14:01:02.520068  644218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 14:01:02.520185  644218 ssh_runner.go:195] Run: crio --version
	I0210 14:01:02.551045  644218 ssh_runner.go:195] Run: crio --version
	I0210 14:01:02.580931  644218 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0210 14:01:02.582157  644218 main.go:141] libmachine: (old-k8s-version-643105) Calling .GetIP
	I0210 14:01:02.584852  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:02.585284  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:ed:f5", ip: ""} in network mk-old-k8s-version-643105: {Iface:virbr3 ExpiryTime:2025-02-10 15:00:53 +0000 UTC Type:0 Mac:52:54:00:de:ed:f5 Iaid: IPaddr:192.168.72.78 Prefix:24 Hostname:old-k8s-version-643105 Clientid:01:52:54:00:de:ed:f5}
	I0210 14:01:02.585304  644218 main.go:141] libmachine: (old-k8s-version-643105) DBG | domain old-k8s-version-643105 has defined IP address 192.168.72.78 and MAC address 52:54:00:de:ed:f5 in network mk-old-k8s-version-643105
	I0210 14:01:02.585561  644218 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0210 14:01:02.590450  644218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:01:02.604324  644218 kubeadm.go:883] updating cluster {Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 14:01:02.604467  644218 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 14:01:02.604516  644218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:01:02.652623  644218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 14:01:02.652686  644218 ssh_runner.go:195] Run: which lz4
	I0210 14:01:02.656943  644218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 14:01:02.661500  644218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 14:01:02.661534  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0210 14:01:04.339580  644218 crio.go:462] duration metric: took 1.682671792s to copy over tarball
	I0210 14:01:04.339684  644218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 14:01:07.350309  644218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.010577091s)
	I0210 14:01:07.350351  644218 crio.go:469] duration metric: took 3.010729902s to extract the tarball
	I0210 14:01:07.350361  644218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 14:01:07.395580  644218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:01:07.429452  644218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0210 14:01:07.429482  644218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0210 14:01:07.429570  644218 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:01:07.429600  644218 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.429606  644218 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.429571  644218 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.429634  644218 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.429647  644218 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.429597  644218 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.429724  644218 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0210 14:01:07.431438  644218 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.431487  644218 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.431493  644218 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.431504  644218 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0210 14:01:07.431438  644218 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:01:07.431443  644218 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.431511  644218 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.431613  644218 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.615291  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0210 14:01:07.623050  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.638086  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.652614  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.659368  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.667259  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.674953  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.742829  644218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0210 14:01:07.742919  644218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.742979  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.743280  644218 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0210 14:01:07.743320  644218 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0210 14:01:07.743365  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.783732  644218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0210 14:01:07.783792  644218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.783839  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.825251  644218 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0210 14:01:07.825316  644218 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.825371  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.831958  644218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0210 14:01:07.832006  644218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.832057  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.832062  644218 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0210 14:01:07.832097  644218 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.832099  644218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0210 14:01:07.832131  644218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.832142  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.832161  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.832167  644218 ssh_runner.go:195] Run: which crictl
	I0210 14:01:07.832168  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 14:01:07.832201  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.832291  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.836691  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:07.942019  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:07.947733  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 14:01:07.947838  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:07.955245  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:07.955328  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:07.955351  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 14:01:07.960018  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:08.070966  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0210 14:01:08.126839  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 14:01:08.126913  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0210 14:01:08.127415  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0210 14:01:08.131979  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0210 14:01:08.132020  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:08.132080  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0210 14:01:08.209596  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0210 14:01:08.267603  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0210 14:01:08.269564  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0210 14:01:08.275411  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0210 14:01:08.282282  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0210 14:01:08.294152  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0210 14:01:08.294240  644218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0210 14:01:08.325700  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0210 14:01:08.345419  644218 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0210 14:01:08.523550  644218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:01:08.667959  644218 cache_images.go:92] duration metric: took 1.238457309s to LoadCachedImages
	W0210 14:01:08.668089  644218 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20390-580861/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0210 14:01:08.668109  644218 kubeadm.go:934] updating node { 192.168.72.78 8443 v1.20.0 crio true true} ...
	I0210 14:01:08.668302  644218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-643105 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 14:01:08.668409  644218 ssh_runner.go:195] Run: crio config
	I0210 14:01:08.722011  644218 cni.go:84] Creating CNI manager for ""
	I0210 14:01:08.722036  644218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:01:08.722084  644218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 14:01:08.722108  644218 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.78 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-643105 NodeName:old-k8s-version-643105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 14:01:08.722252  644218 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-643105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 14:01:08.722318  644218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 14:01:08.733118  644218 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 14:01:08.733210  644218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 14:01:08.743915  644218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0210 14:01:08.763793  644218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 14:01:08.783491  644218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0210 14:01:08.803659  644218 ssh_runner.go:195] Run: grep 192.168.72.78	control-plane.minikube.internal$ /etc/hosts
	I0210 14:01:08.808218  644218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:01:08.822404  644218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:01:08.942076  644218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 14:01:08.960541  644218 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105 for IP: 192.168.72.78
	I0210 14:01:08.960571  644218 certs.go:194] generating shared ca certs ...
	I0210 14:01:08.960594  644218 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:01:08.960813  644218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 14:01:08.960874  644218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 14:01:08.960887  644218 certs.go:256] generating profile certs ...
	I0210 14:01:08.961019  644218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/client.key
	I0210 14:01:08.961097  644218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key.2b43ede7
	I0210 14:01:08.961152  644218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key
	I0210 14:01:08.961318  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 14:01:08.961360  644218 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 14:01:08.961375  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 14:01:08.961405  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 14:01:08.961438  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 14:01:08.961471  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 14:01:08.961526  644218 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:01:08.962236  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 14:01:09.002999  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 14:01:09.042607  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 14:01:09.078020  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 14:01:09.105717  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 14:01:09.132990  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 14:01:09.159931  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 14:01:09.188143  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/old-k8s-version-643105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 14:01:09.227520  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 14:01:09.257228  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 14:01:09.282623  644218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 14:01:09.306810  644218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 14:01:09.325730  644218 ssh_runner.go:195] Run: openssl version
	I0210 14:01:09.332234  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 14:01:09.346330  644218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:01:09.351353  644218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:01:09.351419  644218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:01:09.358262  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 14:01:09.370517  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 14:01:09.382204  644218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 14:01:09.386897  644218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 14:01:09.386964  644218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 14:01:09.392847  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 14:01:09.404611  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 14:01:09.416794  644218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 14:01:09.421929  644218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 14:01:09.422001  644218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 14:01:09.428502  644218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 14:01:09.440486  644218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 14:01:09.445440  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 14:01:09.451749  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 14:01:09.458986  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 14:01:09.465394  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 14:01:09.472248  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 14:01:09.479629  644218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 14:01:09.486700  644218 kubeadm.go:392] StartCluster: {Name:old-k8s-version-643105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-643105 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.78 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:01:09.486817  644218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 14:01:09.486888  644218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:01:09.527393  644218 cri.go:89] found id: ""
	I0210 14:01:09.527468  644218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 14:01:09.538292  644218 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 14:01:09.538316  644218 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 14:01:09.538361  644218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 14:01:09.548788  644218 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 14:01:09.549897  644218 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-643105" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:01:09.550478  644218 kubeconfig.go:62] /home/jenkins/minikube-integration/20390-580861/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-643105" cluster setting kubeconfig missing "old-k8s-version-643105" context setting]
	I0210 14:01:09.551355  644218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:01:09.595572  644218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 14:01:09.608048  644218 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.78
	I0210 14:01:09.608087  644218 kubeadm.go:1160] stopping kube-system containers ...
	I0210 14:01:09.608107  644218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 14:01:09.608167  644218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:01:09.652676  644218 cri.go:89] found id: ""
	I0210 14:01:09.652766  644218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 14:01:09.670953  644218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:01:09.683380  644218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:01:09.683403  644218 kubeadm.go:157] found existing configuration files:
	
	I0210 14:01:09.683452  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 14:01:09.694551  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:01:09.694611  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:01:09.705237  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 14:01:09.715066  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:01:09.715145  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:01:09.726566  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 14:01:09.737269  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:01:09.737352  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:01:09.748364  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 14:01:09.760127  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:01:09.760192  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:01:09.772077  644218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:01:09.782590  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:09.933455  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:10.817736  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:11.047055  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:11.146436  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:01:11.243309  644218 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:01:11.243404  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:11.744192  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:12.244363  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:12.743801  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:13.243553  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:13.744474  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:14.243523  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:14.744173  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:15.243867  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:15.743694  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:16.244417  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:16.743628  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:17.244040  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:17.744421  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:18.244035  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:18.744414  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:19.244475  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:19.743804  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:20.244513  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:20.743606  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:21.244269  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:21.744442  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:22.244379  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:22.743484  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:23.243994  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:23.744178  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:24.244394  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:24.744175  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:25.244420  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:25.744476  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:26.243537  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:26.744334  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:27.244400  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:27.743573  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:28.244521  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:28.743721  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:29.244304  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:29.744265  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:30.243673  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:30.744121  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:31.243493  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:31.744306  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:32.244304  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:32.743525  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:33.244550  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:33.743639  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:34.244395  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:34.744112  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:35.244321  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:35.743570  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:36.244179  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:36.744400  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:37.244130  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:37.743892  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:38.243746  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:38.743772  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:39.244330  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:39.743916  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:40.243566  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:40.743846  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:41.243608  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:41.743950  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:42.244397  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:42.744118  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:43.244417  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:43.744172  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:44.243711  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:44.743862  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:45.243727  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:45.743873  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:46.244115  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:46.743788  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:47.244429  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:47.743614  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:48.244349  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:48.743552  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:49.243815  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:49.744369  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:50.243839  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:50.743533  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:51.244507  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:51.744137  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:52.244106  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:52.744366  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:53.244035  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:53.744155  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:54.243661  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:54.744106  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:55.244495  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:55.744433  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:56.244154  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:56.744508  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:57.244475  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:57.743886  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:58.243572  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:58.744414  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:59.244367  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:01:59.743561  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:00.243790  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:00.743903  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:01.243740  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:01.744269  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:02.244119  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:02.743871  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:03.243921  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:03.744410  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:04.243622  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:04.744443  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:05.244122  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:05.744007  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:06.244161  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:06.743692  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:07.244335  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:07.743959  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:08.243492  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:08.743587  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:09.244176  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:09.744483  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:10.243822  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:10.744008  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:11.244385  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:11.244471  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:11.286364  644218 cri.go:89] found id: ""
	I0210 14:02:11.286393  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.286405  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:11.286417  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:11.286475  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:11.329994  644218 cri.go:89] found id: ""
	I0210 14:02:11.330022  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.330051  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:11.330059  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:11.330138  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:11.367663  644218 cri.go:89] found id: ""
	I0210 14:02:11.367695  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.367705  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:11.367712  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:11.367768  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:11.403264  644218 cri.go:89] found id: ""
	I0210 14:02:11.403304  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.403316  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:11.403325  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:11.403394  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:11.440492  644218 cri.go:89] found id: ""
	I0210 14:02:11.440526  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.440538  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:11.440547  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:11.440613  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:11.476373  644218 cri.go:89] found id: ""
	I0210 14:02:11.476405  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.476415  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:11.476423  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:11.476488  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:11.514207  644218 cri.go:89] found id: ""
	I0210 14:02:11.514240  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.514248  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:11.514255  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:11.514306  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:11.549693  644218 cri.go:89] found id: ""
	I0210 14:02:11.549728  644218 logs.go:282] 0 containers: []
	W0210 14:02:11.549739  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:11.549759  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:11.549776  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:11.562981  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:11.563007  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:11.693788  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:11.693815  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:11.693828  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:11.764272  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:11.764318  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:11.806070  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:11.806099  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:14.358810  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:14.372745  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:14.372832  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:14.409693  644218 cri.go:89] found id: ""
	I0210 14:02:14.409725  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.409736  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:14.409746  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:14.409824  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:14.453067  644218 cri.go:89] found id: ""
	I0210 14:02:14.453102  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.453111  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:14.453118  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:14.453203  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:14.492519  644218 cri.go:89] found id: ""
	I0210 14:02:14.492546  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.492554  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:14.492560  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:14.492640  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:14.529288  644218 cri.go:89] found id: ""
	I0210 14:02:14.529322  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.529332  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:14.529340  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:14.529408  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:14.575092  644218 cri.go:89] found id: ""
	I0210 14:02:14.575123  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.575132  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:14.575138  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:14.575211  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:14.621654  644218 cri.go:89] found id: ""
	I0210 14:02:14.621679  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.621690  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:14.621699  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:14.621761  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:14.664478  644218 cri.go:89] found id: ""
	I0210 14:02:14.664506  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.664513  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:14.664519  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:14.664572  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:14.710019  644218 cri.go:89] found id: ""
	I0210 14:02:14.710054  644218 logs.go:282] 0 containers: []
	W0210 14:02:14.710063  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:14.710073  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:14.710087  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:14.762929  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:14.762970  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:14.776939  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:14.776968  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:14.848342  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:14.848365  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:14.848381  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:14.922486  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:14.922535  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:17.466274  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:17.480332  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:17.480412  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:17.518257  644218 cri.go:89] found id: ""
	I0210 14:02:17.518290  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.518302  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:17.518311  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:17.518372  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:17.553779  644218 cri.go:89] found id: ""
	I0210 14:02:17.553806  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.553814  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:17.553826  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:17.553882  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:17.595478  644218 cri.go:89] found id: ""
	I0210 14:02:17.595529  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.595538  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:17.595545  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:17.595615  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:17.632549  644218 cri.go:89] found id: ""
	I0210 14:02:17.632574  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.632582  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:17.632588  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:17.632650  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:17.667748  644218 cri.go:89] found id: ""
	I0210 14:02:17.667779  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.667788  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:17.667794  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:17.667867  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:17.702855  644218 cri.go:89] found id: ""
	I0210 14:02:17.702891  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.702903  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:17.702911  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:17.702980  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:17.735604  644218 cri.go:89] found id: ""
	I0210 14:02:17.735635  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.735644  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:17.735651  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:17.735718  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:17.770407  644218 cri.go:89] found id: ""
	I0210 14:02:17.770441  644218 logs.go:282] 0 containers: []
	W0210 14:02:17.770465  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:17.770479  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:17.770505  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:17.850219  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:17.850247  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:17.850266  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:17.930615  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:17.930665  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:17.976840  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:17.976878  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:18.030287  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:18.030334  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:20.547098  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:20.568343  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:20.568418  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:20.617065  644218 cri.go:89] found id: ""
	I0210 14:02:20.617117  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.617129  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:20.617142  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:20.617216  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:20.666206  644218 cri.go:89] found id: ""
	I0210 14:02:20.666242  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.666254  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:20.666261  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:20.666342  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:20.702778  644218 cri.go:89] found id: ""
	I0210 14:02:20.702813  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.702826  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:20.702834  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:20.702894  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:20.738798  644218 cri.go:89] found id: ""
	I0210 14:02:20.738825  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.738835  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:20.738844  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:20.738916  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:20.779218  644218 cri.go:89] found id: ""
	I0210 14:02:20.779251  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.779270  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:20.779279  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:20.779347  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:20.817485  644218 cri.go:89] found id: ""
	I0210 14:02:20.817519  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.817535  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:20.817546  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:20.817620  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:20.853588  644218 cri.go:89] found id: ""
	I0210 14:02:20.853622  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.853672  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:20.853679  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:20.853738  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:20.889051  644218 cri.go:89] found id: ""
	I0210 14:02:20.889088  644218 logs.go:282] 0 containers: []
	W0210 14:02:20.889120  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:20.889134  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:20.889148  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:20.940039  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:20.940084  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:20.954579  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:20.954608  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:21.024304  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:21.024332  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:21.024346  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:21.101726  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:21.101774  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:23.647432  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:23.660624  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:23.660713  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:23.701064  644218 cri.go:89] found id: ""
	I0210 14:02:23.701094  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.701102  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:23.701108  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:23.701162  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:23.735230  644218 cri.go:89] found id: ""
	I0210 14:02:23.735258  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.735266  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:23.735272  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:23.735328  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:23.770242  644218 cri.go:89] found id: ""
	I0210 14:02:23.770273  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.770282  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:23.770291  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:23.770361  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:23.807768  644218 cri.go:89] found id: ""
	I0210 14:02:23.807802  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.807815  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:23.807823  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:23.807896  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:23.844969  644218 cri.go:89] found id: ""
	I0210 14:02:23.845006  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.845018  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:23.845032  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:23.845105  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:23.880080  644218 cri.go:89] found id: ""
	I0210 14:02:23.880119  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.880131  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:23.880138  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:23.880217  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:23.926799  644218 cri.go:89] found id: ""
	I0210 14:02:23.926835  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.926843  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:23.926850  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:23.926907  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:23.967286  644218 cri.go:89] found id: ""
	I0210 14:02:23.967320  644218 logs.go:282] 0 containers: []
	W0210 14:02:23.967332  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:23.967347  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:23.967364  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:24.045745  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:24.045798  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:24.089243  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:24.089276  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:24.138300  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:24.138342  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:24.154534  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:24.154582  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:24.227255  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:26.728927  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:26.743363  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:26.743447  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:26.779325  644218 cri.go:89] found id: ""
	I0210 14:02:26.779362  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.779375  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:26.779383  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:26.779450  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:26.816861  644218 cri.go:89] found id: ""
	I0210 14:02:26.816894  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.816906  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:26.816952  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:26.817029  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:26.860520  644218 cri.go:89] found id: ""
	I0210 14:02:26.860552  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.860561  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:26.860568  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:26.860637  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:26.898009  644218 cri.go:89] found id: ""
	I0210 14:02:26.898044  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.898055  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:26.898064  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:26.898136  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:26.931901  644218 cri.go:89] found id: ""
	I0210 14:02:26.931939  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.931958  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:26.931968  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:26.932045  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:26.975597  644218 cri.go:89] found id: ""
	I0210 14:02:26.975625  644218 logs.go:282] 0 containers: []
	W0210 14:02:26.975633  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:26.975640  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:26.975695  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:27.012995  644218 cri.go:89] found id: ""
	I0210 14:02:27.013029  644218 logs.go:282] 0 containers: []
	W0210 14:02:27.013040  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:27.013048  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:27.013116  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:27.050318  644218 cri.go:89] found id: ""
	I0210 14:02:27.050346  644218 logs.go:282] 0 containers: []
	W0210 14:02:27.050354  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:27.050364  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:27.050377  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:27.102947  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:27.102983  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:27.117768  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:27.117815  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:27.186683  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:27.186707  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:27.186721  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:27.267129  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:27.267166  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:29.811859  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:29.825046  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:29.825142  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:29.861264  644218 cri.go:89] found id: ""
	I0210 14:02:29.861303  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.861316  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:29.861324  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:29.861397  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:29.900434  644218 cri.go:89] found id: ""
	I0210 14:02:29.900464  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.900472  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:29.900479  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:29.900542  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:29.937412  644218 cri.go:89] found id: ""
	I0210 14:02:29.937442  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.937454  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:29.937461  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:29.937545  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:29.978051  644218 cri.go:89] found id: ""
	I0210 14:02:29.978082  644218 logs.go:282] 0 containers: []
	W0210 14:02:29.978092  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:29.978099  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:29.978166  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:30.017678  644218 cri.go:89] found id: ""
	I0210 14:02:30.017766  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.017782  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:30.017791  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:30.017860  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:30.059305  644218 cri.go:89] found id: ""
	I0210 14:02:30.059336  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.059346  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:30.059355  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:30.059425  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:30.096690  644218 cri.go:89] found id: ""
	I0210 14:02:30.096736  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.096748  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:30.096757  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:30.096829  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:30.132812  644218 cri.go:89] found id: ""
	I0210 14:02:30.132846  644218 logs.go:282] 0 containers: []
	W0210 14:02:30.132855  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:30.132866  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:30.132883  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:30.186166  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:30.186208  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:30.202789  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:30.202827  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:30.278004  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:30.278031  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:30.278049  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:30.366990  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:30.367030  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:32.908509  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:32.921779  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:32.921856  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:32.962265  644218 cri.go:89] found id: ""
	I0210 14:02:32.962300  644218 logs.go:282] 0 containers: []
	W0210 14:02:32.962311  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:32.962319  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:32.962388  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:32.996492  644218 cri.go:89] found id: ""
	I0210 14:02:32.996524  644218 logs.go:282] 0 containers: []
	W0210 14:02:32.996537  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:32.996544  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:32.996611  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:33.033211  644218 cri.go:89] found id: ""
	I0210 14:02:33.033251  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.033265  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:33.033274  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:33.033345  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:33.067479  644218 cri.go:89] found id: ""
	I0210 14:02:33.067517  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.067528  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:33.067537  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:33.067631  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:33.105719  644218 cri.go:89] found id: ""
	I0210 14:02:33.105750  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.105761  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:33.105768  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:33.105836  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:33.145033  644218 cri.go:89] found id: ""
	I0210 14:02:33.145060  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.145067  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:33.145084  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:33.145135  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:33.180968  644218 cri.go:89] found id: ""
	I0210 14:02:33.180994  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.181003  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:33.181013  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:33.181071  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:33.216463  644218 cri.go:89] found id: ""
	I0210 14:02:33.216488  644218 logs.go:282] 0 containers: []
	W0210 14:02:33.216497  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:33.216507  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:33.216527  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:33.229839  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:33.229873  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:33.302667  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:33.302694  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:33.302712  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:33.380724  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:33.380767  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:33.422940  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:33.422974  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:35.980433  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:35.993639  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:35.993721  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:36.031302  644218 cri.go:89] found id: ""
	I0210 14:02:36.031338  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.031351  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:36.031360  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:36.031418  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:36.064362  644218 cri.go:89] found id: ""
	I0210 14:02:36.064396  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.064408  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:36.064417  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:36.064474  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:36.099393  644218 cri.go:89] found id: ""
	I0210 14:02:36.099422  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.099431  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:36.099438  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:36.099506  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:36.135921  644218 cri.go:89] found id: ""
	I0210 14:02:36.135952  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.135963  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:36.135972  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:36.136024  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:36.178044  644218 cri.go:89] found id: ""
	I0210 14:02:36.178073  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.178083  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:36.178091  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:36.178151  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:36.213320  644218 cri.go:89] found id: ""
	I0210 14:02:36.213350  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.213362  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:36.213369  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:36.213442  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:36.251431  644218 cri.go:89] found id: ""
	I0210 14:02:36.251457  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.251465  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:36.251474  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:36.251543  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:36.286389  644218 cri.go:89] found id: ""
	I0210 14:02:36.286421  644218 logs.go:282] 0 containers: []
	W0210 14:02:36.286432  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:36.286446  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:36.286463  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:36.300293  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:36.300323  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:36.373240  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:36.373265  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:36.373283  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:36.455529  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:36.455574  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:36.497953  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:36.497994  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:39.051048  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:39.063906  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:39.064003  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:39.097633  644218 cri.go:89] found id: ""
	I0210 14:02:39.097669  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.097681  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:39.097690  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:39.097759  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:39.133312  644218 cri.go:89] found id: ""
	I0210 14:02:39.133341  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.133353  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:39.133360  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:39.133425  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:39.170137  644218 cri.go:89] found id: ""
	I0210 14:02:39.170169  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.170180  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:39.170188  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:39.170257  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:39.204690  644218 cri.go:89] found id: ""
	I0210 14:02:39.204722  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.204731  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:39.204738  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:39.204792  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:39.241064  644218 cri.go:89] found id: ""
	I0210 14:02:39.241094  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.241102  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:39.241119  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:39.241178  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:39.279602  644218 cri.go:89] found id: ""
	I0210 14:02:39.279630  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.279638  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:39.279644  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:39.279697  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:39.328061  644218 cri.go:89] found id: ""
	I0210 14:02:39.328089  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.328097  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:39.328105  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:39.328177  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:39.365418  644218 cri.go:89] found id: ""
	I0210 14:02:39.365447  644218 logs.go:282] 0 containers: []
	W0210 14:02:39.365456  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:39.365467  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:39.365478  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:39.418099  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:39.418135  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:39.432723  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:39.432763  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:39.502112  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:39.502144  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:39.502177  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:39.579038  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:39.579088  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:42.122820  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:42.135832  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:42.135904  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:42.170673  644218 cri.go:89] found id: ""
	I0210 14:02:42.170713  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.170726  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:42.170735  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:42.170809  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:42.204257  644218 cri.go:89] found id: ""
	I0210 14:02:42.204303  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.204312  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:42.204319  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:42.204383  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:42.238954  644218 cri.go:89] found id: ""
	I0210 14:02:42.238987  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.238999  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:42.239007  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:42.239079  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:42.273753  644218 cri.go:89] found id: ""
	I0210 14:02:42.273784  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.273793  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:42.273800  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:42.273852  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:42.305964  644218 cri.go:89] found id: ""
	I0210 14:02:42.305989  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.305997  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:42.306003  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:42.306055  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:42.340601  644218 cri.go:89] found id: ""
	I0210 14:02:42.340635  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.340645  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:42.340654  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:42.340723  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:42.378707  644218 cri.go:89] found id: ""
	I0210 14:02:42.378743  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.378755  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:42.378765  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:42.378836  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:42.418150  644218 cri.go:89] found id: ""
	I0210 14:02:42.418187  644218 logs.go:282] 0 containers: []
	W0210 14:02:42.418199  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:42.418214  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:42.418238  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:42.432129  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:42.432171  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:42.501810  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:42.501841  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:42.501862  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:42.576752  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:42.576797  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:42.616411  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:42.616441  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:45.171596  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:45.184429  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:45.184514  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:45.219366  644218 cri.go:89] found id: ""
	I0210 14:02:45.219398  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.219410  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:45.219419  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:45.219488  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:45.255638  644218 cri.go:89] found id: ""
	I0210 14:02:45.255670  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.255679  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:45.255685  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:45.255739  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:45.290092  644218 cri.go:89] found id: ""
	I0210 14:02:45.290126  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.290135  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:45.290141  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:45.290207  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:45.327283  644218 cri.go:89] found id: ""
	I0210 14:02:45.327311  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.327320  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:45.327326  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:45.327393  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:45.362888  644218 cri.go:89] found id: ""
	I0210 14:02:45.362929  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.362940  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:45.362949  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:45.363019  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:45.398844  644218 cri.go:89] found id: ""
	I0210 14:02:45.398875  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.398884  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:45.398891  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:45.398947  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:45.434994  644218 cri.go:89] found id: ""
	I0210 14:02:45.435028  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.435040  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:45.435049  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:45.435124  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:45.471469  644218 cri.go:89] found id: ""
	I0210 14:02:45.471500  644218 logs.go:282] 0 containers: []
	W0210 14:02:45.471511  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:45.471526  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:45.471544  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:45.555817  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:45.555860  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:45.597427  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:45.597458  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:45.651433  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:45.651471  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:45.665662  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:45.665691  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:45.733400  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:48.233572  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:48.246787  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:48.246865  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:48.282005  644218 cri.go:89] found id: ""
	I0210 14:02:48.282031  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.282040  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:48.282046  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:48.282122  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:48.320510  644218 cri.go:89] found id: ""
	I0210 14:02:48.320542  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.320553  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:48.320569  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:48.320640  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:48.360959  644218 cri.go:89] found id: ""
	I0210 14:02:48.360988  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.360997  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:48.361004  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:48.361056  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:48.399784  644218 cri.go:89] found id: ""
	I0210 14:02:48.399814  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.399825  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:48.399832  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:48.399897  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:48.435401  644218 cri.go:89] found id: ""
	I0210 14:02:48.435433  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.435443  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:48.435451  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:48.435515  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:48.470377  644218 cri.go:89] found id: ""
	I0210 14:02:48.470410  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.470423  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:48.470431  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:48.470501  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:48.513766  644218 cri.go:89] found id: ""
	I0210 14:02:48.513803  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.513812  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:48.513818  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:48.513881  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:48.548542  644218 cri.go:89] found id: ""
	I0210 14:02:48.548574  644218 logs.go:282] 0 containers: []
	W0210 14:02:48.548587  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:48.548599  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:48.548614  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:48.599918  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:48.599954  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:48.614533  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:48.614577  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:48.694464  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:48.694499  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:48.694518  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:48.775406  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:48.775469  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:51.327037  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:51.339986  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:51.340076  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:51.375772  644218 cri.go:89] found id: ""
	I0210 14:02:51.375801  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.375812  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:51.375821  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:51.375885  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:51.414590  644218 cri.go:89] found id: ""
	I0210 14:02:51.414617  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.414626  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:51.414636  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:51.414696  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:51.454903  644218 cri.go:89] found id: ""
	I0210 14:02:51.454934  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.454943  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:51.454952  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:51.455020  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:51.493095  644218 cri.go:89] found id: ""
	I0210 14:02:51.493119  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.493127  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:51.493133  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:51.493185  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:51.529308  644218 cri.go:89] found id: ""
	I0210 14:02:51.529337  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.529345  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:51.529351  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:51.529409  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:51.567667  644218 cri.go:89] found id: ""
	I0210 14:02:51.567692  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.567701  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:51.567708  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:51.567764  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:51.606199  644218 cri.go:89] found id: ""
	I0210 14:02:51.606240  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.606252  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:51.606259  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:51.606326  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:51.639401  644218 cri.go:89] found id: ""
	I0210 14:02:51.639438  644218 logs.go:282] 0 containers: []
	W0210 14:02:51.639451  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:51.639466  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:51.639483  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:51.676250  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:51.676315  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:51.727512  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:51.727556  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:51.744257  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:51.744314  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:51.819189  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:51.819220  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:51.819239  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:54.397008  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:54.426335  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:54.426398  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:54.461194  644218 cri.go:89] found id: ""
	I0210 14:02:54.461230  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.461239  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:54.461245  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:54.461308  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:54.498546  644218 cri.go:89] found id: ""
	I0210 14:02:54.498574  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.498583  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:54.498591  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:54.498668  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:54.534427  644218 cri.go:89] found id: ""
	I0210 14:02:54.534459  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.534471  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:54.534480  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:54.534536  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:54.570856  644218 cri.go:89] found id: ""
	I0210 14:02:54.570888  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.570898  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:54.570907  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:54.570986  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:54.609274  644218 cri.go:89] found id: ""
	I0210 14:02:54.609316  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.609329  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:54.609339  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:54.609394  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:54.650978  644218 cri.go:89] found id: ""
	I0210 14:02:54.651012  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.651024  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:54.651032  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:54.651103  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:54.694455  644218 cri.go:89] found id: ""
	I0210 14:02:54.694486  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.694494  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:54.694500  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:54.694565  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:54.734916  644218 cri.go:89] found id: ""
	I0210 14:02:54.734944  644218 logs.go:282] 0 containers: []
	W0210 14:02:54.734954  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:54.734969  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:54.734985  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:02:54.781320  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:54.781365  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:54.839551  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:54.839592  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:54.856166  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:54.856198  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:54.937073  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:54.937095  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:54.937108  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:57.515561  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:02:57.529013  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:02:57.529077  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:02:57.566030  644218 cri.go:89] found id: ""
	I0210 14:02:57.566072  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.566083  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:02:57.566092  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:02:57.566165  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:02:57.601983  644218 cri.go:89] found id: ""
	I0210 14:02:57.602020  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.602033  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:02:57.602047  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:02:57.602115  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:02:57.641798  644218 cri.go:89] found id: ""
	I0210 14:02:57.641830  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.641840  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:02:57.641848  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:02:57.641918  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:02:57.677360  644218 cri.go:89] found id: ""
	I0210 14:02:57.677392  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.677405  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:02:57.677414  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:02:57.677482  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:02:57.714634  644218 cri.go:89] found id: ""
	I0210 14:02:57.714667  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.714678  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:02:57.714685  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:02:57.714751  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:02:57.755338  644218 cri.go:89] found id: ""
	I0210 14:02:57.755371  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.755383  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:02:57.755392  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:02:57.755457  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:02:57.792621  644218 cri.go:89] found id: ""
	I0210 14:02:57.792658  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.792672  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:02:57.792690  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:02:57.792753  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:02:57.829844  644218 cri.go:89] found id: ""
	I0210 14:02:57.829879  644218 logs.go:282] 0 containers: []
	W0210 14:02:57.829892  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:02:57.829907  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:02:57.829932  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:02:57.885425  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:02:57.885462  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:02:57.899815  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:02:57.899847  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:02:57.970164  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:02:57.970193  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:02:57.970208  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:02:58.050373  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:02:58.050415  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:00.595884  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:00.609913  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:00.610000  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:00.649124  644218 cri.go:89] found id: ""
	I0210 14:03:00.649158  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.649169  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:00.649178  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:00.649252  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:00.686014  644218 cri.go:89] found id: ""
	I0210 14:03:00.686048  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.686058  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:00.686066  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:00.686124  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:00.720878  644218 cri.go:89] found id: ""
	I0210 14:03:00.720908  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.720917  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:00.720924  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:00.720991  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:00.756490  644218 cri.go:89] found id: ""
	I0210 14:03:00.756515  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.756524  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:00.756530  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:00.756581  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:00.804539  644218 cri.go:89] found id: ""
	I0210 14:03:00.804572  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.804583  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:00.804590  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:00.804658  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:00.858778  644218 cri.go:89] found id: ""
	I0210 14:03:00.858811  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.858820  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:00.858828  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:00.858895  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:00.913535  644218 cri.go:89] found id: ""
	I0210 14:03:00.913564  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.913572  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:00.913578  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:00.913642  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:00.959513  644218 cri.go:89] found id: ""
	I0210 14:03:00.959545  644218 logs.go:282] 0 containers: []
	W0210 14:03:00.959556  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:00.959569  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:00.959587  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:01.016776  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:01.016821  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:01.033429  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:01.033464  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:01.118266  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:01.118287  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:01.118303  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:01.205884  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:01.205937  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:03.753520  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:03.767719  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:03.767790  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:03.802499  644218 cri.go:89] found id: ""
	I0210 14:03:03.802531  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.802542  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:03.802552  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:03.802625  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:03.836771  644218 cri.go:89] found id: ""
	I0210 14:03:03.836808  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.836818  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:03.836824  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:03.836915  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:03.872213  644218 cri.go:89] found id: ""
	I0210 14:03:03.872241  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.872249  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:03.872256  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:03.872321  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:03.907698  644218 cri.go:89] found id: ""
	I0210 14:03:03.907739  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.907751  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:03.907759  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:03.907833  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:03.944625  644218 cri.go:89] found id: ""
	I0210 14:03:03.944655  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.944662  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:03.944668  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:03.944737  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:03.983758  644218 cri.go:89] found id: ""
	I0210 14:03:03.983784  644218 logs.go:282] 0 containers: []
	W0210 14:03:03.983794  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:03.983803  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:03.983888  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:04.019244  644218 cri.go:89] found id: ""
	I0210 14:03:04.019272  644218 logs.go:282] 0 containers: []
	W0210 14:03:04.019280  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:04.019286  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:04.019347  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:04.055800  644218 cri.go:89] found id: ""
	I0210 14:03:04.055831  644218 logs.go:282] 0 containers: []
	W0210 14:03:04.055840  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:04.055850  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:04.055865  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:04.124940  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:04.124968  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:04.124981  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:04.198549  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:04.198589  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:04.242831  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:04.242864  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:04.294003  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:04.294040  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:06.810538  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:06.825419  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:06.825505  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:06.860135  644218 cri.go:89] found id: ""
	I0210 14:03:06.860176  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.860186  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:06.860206  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:06.860262  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:06.896110  644218 cri.go:89] found id: ""
	I0210 14:03:06.896142  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.896151  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:06.896172  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:06.896227  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:06.931936  644218 cri.go:89] found id: ""
	I0210 14:03:06.931965  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.931975  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:06.931982  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:06.932039  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:06.968502  644218 cri.go:89] found id: ""
	I0210 14:03:06.968529  644218 logs.go:282] 0 containers: []
	W0210 14:03:06.968537  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:06.968543  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:06.968609  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:07.004172  644218 cri.go:89] found id: ""
	I0210 14:03:07.004201  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.004210  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:07.004224  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:07.004308  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:07.037806  644218 cri.go:89] found id: ""
	I0210 14:03:07.037845  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.037857  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:07.037866  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:07.037920  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:07.072468  644218 cri.go:89] found id: ""
	I0210 14:03:07.072502  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.072516  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:07.072524  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:07.072593  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:07.109513  644218 cri.go:89] found id: ""
	I0210 14:03:07.109544  644218 logs.go:282] 0 containers: []
	W0210 14:03:07.109554  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:07.109568  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:07.109585  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:07.162551  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:07.162589  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:07.176535  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:07.176563  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:07.246994  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:07.247029  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:07.247047  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:07.327563  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:07.327611  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:09.876047  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:09.889430  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:09.889512  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:09.922155  644218 cri.go:89] found id: ""
	I0210 14:03:09.922187  644218 logs.go:282] 0 containers: []
	W0210 14:03:09.922199  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:09.922208  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:09.922284  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:09.957894  644218 cri.go:89] found id: ""
	I0210 14:03:09.957929  644218 logs.go:282] 0 containers: []
	W0210 14:03:09.957941  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:09.957949  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:09.958014  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:09.992853  644218 cri.go:89] found id: ""
	I0210 14:03:09.992891  644218 logs.go:282] 0 containers: []
	W0210 14:03:09.992904  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:09.992919  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:09.992998  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:10.028929  644218 cri.go:89] found id: ""
	I0210 14:03:10.028962  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.028978  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:10.028987  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:10.029068  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:10.063936  644218 cri.go:89] found id: ""
	I0210 14:03:10.063982  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.063994  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:10.064003  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:10.064069  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:10.101754  644218 cri.go:89] found id: ""
	I0210 14:03:10.101786  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.101798  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:10.101806  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:10.101865  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:10.140910  644218 cri.go:89] found id: ""
	I0210 14:03:10.140937  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.140945  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:10.140951  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:10.141017  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:10.182602  644218 cri.go:89] found id: ""
	I0210 14:03:10.182629  644218 logs.go:282] 0 containers: []
	W0210 14:03:10.182638  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:10.182651  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:10.182670  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:10.196740  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:10.196776  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:10.269899  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:10.269925  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:10.269952  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:10.349425  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:10.349469  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:10.394256  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:10.394298  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:12.948555  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:12.962549  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:12.962658  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:12.998072  644218 cri.go:89] found id: ""
	I0210 14:03:12.998109  644218 logs.go:282] 0 containers: []
	W0210 14:03:12.998122  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:12.998130  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:12.998199  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:13.032802  644218 cri.go:89] found id: ""
	I0210 14:03:13.032842  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.032853  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:13.032859  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:13.032917  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:13.069970  644218 cri.go:89] found id: ""
	I0210 14:03:13.070006  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.070018  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:13.070026  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:13.070096  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:13.103870  644218 cri.go:89] found id: ""
	I0210 14:03:13.103908  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.103921  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:13.103930  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:13.103995  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:13.140166  644218 cri.go:89] found id: ""
	I0210 14:03:13.140202  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.140214  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:13.140222  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:13.140309  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:13.176097  644218 cri.go:89] found id: ""
	I0210 14:03:13.176134  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.176147  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:13.176157  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:13.176234  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:13.210605  644218 cri.go:89] found id: ""
	I0210 14:03:13.210636  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.210645  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:13.210651  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:13.210716  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:13.243129  644218 cri.go:89] found id: ""
	I0210 14:03:13.243159  644218 logs.go:282] 0 containers: []
	W0210 14:03:13.243168  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:13.243181  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:13.243207  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:13.296477  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:13.296519  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:13.310516  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:13.310547  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:13.382486  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:13.382516  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:13.382535  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:13.458590  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:13.458631  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:16.016166  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:16.030318  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:16.030390  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:16.068316  644218 cri.go:89] found id: ""
	I0210 14:03:16.068352  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.068360  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:16.068367  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:16.068422  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:16.104464  644218 cri.go:89] found id: ""
	I0210 14:03:16.104496  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.104505  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:16.104510  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:16.104622  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:16.143770  644218 cri.go:89] found id: ""
	I0210 14:03:16.143804  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.143816  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:16.143824  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:16.143894  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:16.179218  644218 cri.go:89] found id: ""
	I0210 14:03:16.179250  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.179259  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:16.179268  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:16.179323  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:16.221304  644218 cri.go:89] found id: ""
	I0210 14:03:16.221337  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.221346  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:16.221355  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:16.221407  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:16.257960  644218 cri.go:89] found id: ""
	I0210 14:03:16.257995  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.258005  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:16.258012  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:16.258064  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:16.292339  644218 cri.go:89] found id: ""
	I0210 14:03:16.292372  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.292383  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:16.292393  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:16.292463  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:16.326640  644218 cri.go:89] found id: ""
	I0210 14:03:16.326671  644218 logs.go:282] 0 containers: []
	W0210 14:03:16.326683  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:16.326696  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:16.326738  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:16.341765  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:16.341796  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:16.409145  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:16.409172  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:16.409187  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:16.483525  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:16.483568  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:16.523394  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:16.523430  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:19.074741  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:19.089545  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:19.089619  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:19.128499  644218 cri.go:89] found id: ""
	I0210 14:03:19.128532  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.128543  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:19.128552  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:19.128621  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:19.163249  644218 cri.go:89] found id: ""
	I0210 14:03:19.163288  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.163301  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:19.163309  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:19.163385  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:19.197204  644218 cri.go:89] found id: ""
	I0210 14:03:19.197242  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.197253  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:19.197261  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:19.197329  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:19.232465  644218 cri.go:89] found id: ""
	I0210 14:03:19.232493  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.232501  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:19.232508  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:19.232577  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:19.266055  644218 cri.go:89] found id: ""
	I0210 14:03:19.266080  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.266088  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:19.266094  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:19.266150  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:19.300043  644218 cri.go:89] found id: ""
	I0210 14:03:19.300078  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.300088  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:19.300095  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:19.300158  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:19.336174  644218 cri.go:89] found id: ""
	I0210 14:03:19.336207  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.336220  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:19.336228  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:19.336322  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:19.371913  644218 cri.go:89] found id: ""
	I0210 14:03:19.371941  644218 logs.go:282] 0 containers: []
	W0210 14:03:19.371949  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:19.371959  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:19.371978  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:19.424785  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:19.424828  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:19.439128  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:19.439160  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:19.513243  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:19.513268  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:19.513285  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:19.591125  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:19.591170  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:22.132862  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:22.149797  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:22.149870  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:22.189682  644218 cri.go:89] found id: ""
	I0210 14:03:22.189707  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.189716  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:22.189722  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:22.189779  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:22.230353  644218 cri.go:89] found id: ""
	I0210 14:03:22.230386  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.230398  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:22.230407  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:22.230476  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:22.264639  644218 cri.go:89] found id: ""
	I0210 14:03:22.264673  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.264685  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:22.264693  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:22.264781  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:22.300462  644218 cri.go:89] found id: ""
	I0210 14:03:22.300497  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.300508  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:22.300517  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:22.300596  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:22.338620  644218 cri.go:89] found id: ""
	I0210 14:03:22.338652  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.338664  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:22.338672  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:22.338743  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:22.377041  644218 cri.go:89] found id: ""
	I0210 14:03:22.377073  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.377085  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:22.377093  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:22.377164  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:22.423792  644218 cri.go:89] found id: ""
	I0210 14:03:22.423815  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.423822  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:22.423829  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:22.423901  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:22.466237  644218 cri.go:89] found id: ""
	I0210 14:03:22.466268  644218 logs.go:282] 0 containers: []
	W0210 14:03:22.466282  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:22.466293  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:22.466307  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:22.519771  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:22.519815  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:22.534443  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:22.534489  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:22.625188  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:22.625210  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:22.625224  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:22.702516  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:22.702557  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:25.251508  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:25.265701  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:25.265765  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:25.300644  644218 cri.go:89] found id: ""
	I0210 14:03:25.300676  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.300688  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:25.300698  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:25.300778  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:25.337683  644218 cri.go:89] found id: ""
	I0210 14:03:25.337716  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.337727  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:25.337736  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:25.337804  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:25.371570  644218 cri.go:89] found id: ""
	I0210 14:03:25.371608  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.371620  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:25.371627  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:25.371706  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:25.410522  644218 cri.go:89] found id: ""
	I0210 14:03:25.410546  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.410554  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:25.410561  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:25.410625  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:25.459186  644218 cri.go:89] found id: ""
	I0210 14:03:25.459217  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.459229  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:25.459237  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:25.459300  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:25.496445  644218 cri.go:89] found id: ""
	I0210 14:03:25.496471  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.496479  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:25.496485  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:25.496546  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:25.540431  644218 cri.go:89] found id: ""
	I0210 14:03:25.540459  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.540469  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:25.540476  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:25.540551  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:25.591900  644218 cri.go:89] found id: ""
	I0210 14:03:25.591938  644218 logs.go:282] 0 containers: []
	W0210 14:03:25.591951  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:25.591966  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:25.591983  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:25.631755  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:25.631793  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:25.686052  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:25.686086  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:25.700599  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:25.700635  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:25.794403  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:25.794434  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:25.794451  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:28.383450  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:28.401653  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:28.401732  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:28.442549  644218 cri.go:89] found id: ""
	I0210 14:03:28.442586  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.442598  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:28.442628  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:28.442720  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:28.478500  644218 cri.go:89] found id: ""
	I0210 14:03:28.478533  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.478544  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:28.478553  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:28.478621  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:28.514256  644218 cri.go:89] found id: ""
	I0210 14:03:28.514285  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.514296  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:28.514304  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:28.514370  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:28.549500  644218 cri.go:89] found id: ""
	I0210 14:03:28.549542  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.549555  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:28.549565  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:28.549637  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:28.585046  644218 cri.go:89] found id: ""
	I0210 14:03:28.585082  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.585094  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:28.585103  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:28.585186  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:28.619974  644218 cri.go:89] found id: ""
	I0210 14:03:28.620006  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.620017  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:28.620026  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:28.620091  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:28.661084  644218 cri.go:89] found id: ""
	I0210 14:03:28.661118  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.661129  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:28.661138  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:28.661197  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:28.700310  644218 cri.go:89] found id: ""
	I0210 14:03:28.700347  644218 logs.go:282] 0 containers: []
	W0210 14:03:28.700358  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:28.700374  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:28.700391  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:28.804349  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:28.804401  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:28.849566  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:28.849596  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:28.915698  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:28.915746  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:28.936924  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:28.936972  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:29.014311  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:31.515140  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:31.528631  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:31.528715  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:31.565211  644218 cri.go:89] found id: ""
	I0210 14:03:31.565240  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.565247  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:31.565255  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:31.565316  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:31.612182  644218 cri.go:89] found id: ""
	I0210 14:03:31.612218  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.612230  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:31.612240  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:31.612330  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:31.652447  644218 cri.go:89] found id: ""
	I0210 14:03:31.652468  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.652474  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:31.652480  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:31.652528  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:31.693068  644218 cri.go:89] found id: ""
	I0210 14:03:31.693092  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.693102  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:31.693114  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:31.693161  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:31.737738  644218 cri.go:89] found id: ""
	I0210 14:03:31.737766  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.737777  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:31.737785  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:31.737857  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:31.779661  644218 cri.go:89] found id: ""
	I0210 14:03:31.779696  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.779708  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:31.779717  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:31.779782  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:31.817379  644218 cri.go:89] found id: ""
	I0210 14:03:31.817410  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.817421  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:31.817430  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:31.817479  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:31.857015  644218 cri.go:89] found id: ""
	I0210 14:03:31.857044  644218 logs.go:282] 0 containers: []
	W0210 14:03:31.857055  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:31.857067  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:31.857088  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:31.874642  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:31.874675  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:31.953612  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:31.953638  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:31.953650  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:32.042888  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:32.042929  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:32.091653  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:32.091691  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:34.667984  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:34.686536  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:34.686618  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:34.727820  644218 cri.go:89] found id: ""
	I0210 14:03:34.727858  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.727870  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:34.727880  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:34.727950  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:34.768760  644218 cri.go:89] found id: ""
	I0210 14:03:34.768788  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.768799  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:34.768808  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:34.768887  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:34.804040  644218 cri.go:89] found id: ""
	I0210 14:03:34.804076  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.804089  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:34.804098  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:34.804171  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:34.840334  644218 cri.go:89] found id: ""
	I0210 14:03:34.840361  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.840369  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:34.840375  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:34.840426  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:34.879125  644218 cri.go:89] found id: ""
	I0210 14:03:34.879159  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.879170  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:34.879179  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:34.879246  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:34.919130  644218 cri.go:89] found id: ""
	I0210 14:03:34.919156  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.919164  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:34.919171  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:34.919224  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:34.955694  644218 cri.go:89] found id: ""
	I0210 14:03:34.955725  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.955734  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:34.955740  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:34.955793  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:34.991102  644218 cri.go:89] found id: ""
	I0210 14:03:34.991136  644218 logs.go:282] 0 containers: []
	W0210 14:03:34.991148  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:34.991162  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:34.991186  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:35.006524  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:35.006557  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:35.082736  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:35.082771  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:35.082792  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:35.160254  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:35.160307  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:35.206382  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:35.206417  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:37.758300  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:37.772140  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:37.772211  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:37.807841  644218 cri.go:89] found id: ""
	I0210 14:03:37.807869  644218 logs.go:282] 0 containers: []
	W0210 14:03:37.807879  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:37.807885  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:37.807936  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:37.843192  644218 cri.go:89] found id: ""
	I0210 14:03:37.843224  644218 logs.go:282] 0 containers: []
	W0210 14:03:37.843234  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:37.843243  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:37.843315  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:37.878964  644218 cri.go:89] found id: ""
	I0210 14:03:37.878999  644218 logs.go:282] 0 containers: []
	W0210 14:03:37.879011  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:37.879019  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:37.879094  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:37.915250  644218 cri.go:89] found id: ""
	I0210 14:03:37.915280  644218 logs.go:282] 0 containers: []
	W0210 14:03:37.915291  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:37.915300  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:37.915369  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:37.952822  644218 cri.go:89] found id: ""
	I0210 14:03:37.952861  644218 logs.go:282] 0 containers: []
	W0210 14:03:37.952875  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:37.952882  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:37.952941  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:37.992533  644218 cri.go:89] found id: ""
	I0210 14:03:37.992562  644218 logs.go:282] 0 containers: []
	W0210 14:03:37.992570  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:37.992578  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:37.992638  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:38.030651  644218 cri.go:89] found id: ""
	I0210 14:03:38.030682  644218 logs.go:282] 0 containers: []
	W0210 14:03:38.030694  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:38.030702  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:38.030767  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:38.071024  644218 cri.go:89] found id: ""
	I0210 14:03:38.071057  644218 logs.go:282] 0 containers: []
	W0210 14:03:38.071067  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:38.071081  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:38.071098  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:38.087739  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:38.087776  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:38.183960  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:38.183990  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:38.184009  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:38.272496  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:38.272553  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:38.320162  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:38.320206  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:40.890682  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:40.909447  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:40.909544  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:40.949563  644218 cri.go:89] found id: ""
	I0210 14:03:40.949602  644218 logs.go:282] 0 containers: []
	W0210 14:03:40.949614  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:40.949624  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:40.949692  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:40.990721  644218 cri.go:89] found id: ""
	I0210 14:03:40.990761  644218 logs.go:282] 0 containers: []
	W0210 14:03:40.990773  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:40.990784  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:40.990845  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:41.028234  644218 cri.go:89] found id: ""
	I0210 14:03:41.028273  644218 logs.go:282] 0 containers: []
	W0210 14:03:41.028310  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:41.028319  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:41.028382  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:41.066207  644218 cri.go:89] found id: ""
	I0210 14:03:41.066244  644218 logs.go:282] 0 containers: []
	W0210 14:03:41.066256  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:41.066265  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:41.066352  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:41.106957  644218 cri.go:89] found id: ""
	I0210 14:03:41.106999  644218 logs.go:282] 0 containers: []
	W0210 14:03:41.107012  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:41.107021  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:41.107093  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:41.146265  644218 cri.go:89] found id: ""
	I0210 14:03:41.146300  644218 logs.go:282] 0 containers: []
	W0210 14:03:41.146311  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:41.146327  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:41.146394  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:41.189702  644218 cri.go:89] found id: ""
	I0210 14:03:41.189740  644218 logs.go:282] 0 containers: []
	W0210 14:03:41.189751  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:41.189759  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:41.189829  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:41.226685  644218 cri.go:89] found id: ""
	I0210 14:03:41.226721  644218 logs.go:282] 0 containers: []
	W0210 14:03:41.226733  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:41.226747  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:41.226764  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:41.293405  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:41.293451  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:41.315359  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:41.315412  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:41.433768  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:41.433800  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:41.433817  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:41.526968  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:41.527010  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:44.076461  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:44.095543  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:44.095627  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:44.147764  644218 cri.go:89] found id: ""
	I0210 14:03:44.147804  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.147815  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:44.147824  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:44.147899  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:44.196945  644218 cri.go:89] found id: ""
	I0210 14:03:44.196975  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.196985  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:44.196994  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:44.197071  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:44.237401  644218 cri.go:89] found id: ""
	I0210 14:03:44.237434  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.237444  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:44.237453  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:44.237536  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:44.280786  644218 cri.go:89] found id: ""
	I0210 14:03:44.280819  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.280830  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:44.280839  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:44.280904  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:44.331056  644218 cri.go:89] found id: ""
	I0210 14:03:44.331086  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.331098  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:44.331106  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:44.331199  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:44.371985  644218 cri.go:89] found id: ""
	I0210 14:03:44.372066  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.372089  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:44.372106  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:44.372193  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:44.418686  644218 cri.go:89] found id: ""
	I0210 14:03:44.418723  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.418735  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:44.418744  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:44.418823  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:44.463504  644218 cri.go:89] found id: ""
	I0210 14:03:44.463541  644218 logs.go:282] 0 containers: []
	W0210 14:03:44.463554  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:44.463567  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:44.463582  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:44.556359  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:44.556407  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:44.613264  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:44.613294  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:44.673306  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:44.673347  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:44.692946  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:44.692986  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:44.784696  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:47.285436  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:47.306476  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:47.306545  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:47.357266  644218 cri.go:89] found id: ""
	I0210 14:03:47.357303  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.357315  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:47.357325  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:47.357394  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:47.404748  644218 cri.go:89] found id: ""
	I0210 14:03:47.404787  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.404799  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:47.404807  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:47.404888  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:47.452403  644218 cri.go:89] found id: ""
	I0210 14:03:47.452435  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.452446  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:47.452455  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:47.452522  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:47.487817  644218 cri.go:89] found id: ""
	I0210 14:03:47.487863  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.487873  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:47.487881  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:47.487958  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:47.531980  644218 cri.go:89] found id: ""
	I0210 14:03:47.532016  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.532027  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:47.532039  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:47.532110  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:47.574452  644218 cri.go:89] found id: ""
	I0210 14:03:47.574483  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.574495  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:47.574504  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:47.574578  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:47.630118  644218 cri.go:89] found id: ""
	I0210 14:03:47.630156  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.630168  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:47.630176  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:47.630240  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:47.689520  644218 cri.go:89] found id: ""
	I0210 14:03:47.689548  644218 logs.go:282] 0 containers: []
	W0210 14:03:47.689558  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:47.689570  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:47.689585  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:47.776441  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:47.776534  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:47.795339  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:47.795377  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:47.893837  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:47.893931  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:47.893970  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:47.983418  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:47.983453  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:50.524432  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:50.543066  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:50.543162  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:50.581369  644218 cri.go:89] found id: ""
	I0210 14:03:50.581407  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.581420  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:50.581429  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:50.581500  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:50.616897  644218 cri.go:89] found id: ""
	I0210 14:03:50.616929  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.616942  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:50.616957  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:50.617035  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:50.654122  644218 cri.go:89] found id: ""
	I0210 14:03:50.654170  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.654183  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:50.654192  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:50.654264  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:50.690191  644218 cri.go:89] found id: ""
	I0210 14:03:50.690227  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.690238  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:50.690248  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:50.690310  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:50.731777  644218 cri.go:89] found id: ""
	I0210 14:03:50.731817  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.731833  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:50.731846  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:50.731931  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:50.770168  644218 cri.go:89] found id: ""
	I0210 14:03:50.770202  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.770215  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:50.770224  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:50.770295  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:50.806813  644218 cri.go:89] found id: ""
	I0210 14:03:50.806853  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.806865  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:50.806873  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:50.806947  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:50.848293  644218 cri.go:89] found id: ""
	I0210 14:03:50.848323  644218 logs.go:282] 0 containers: []
	W0210 14:03:50.848334  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:50.848346  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:50.848360  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:50.890863  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:50.890911  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:50.941547  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:50.941589  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:50.955439  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:50.955474  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:51.029545  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:51.029576  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:51.029597  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:53.610694  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:53.623978  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:53.624066  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:53.658282  644218 cri.go:89] found id: ""
	I0210 14:03:53.658315  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.658327  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:53.658336  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:53.658412  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:53.692065  644218 cri.go:89] found id: ""
	I0210 14:03:53.692094  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.692105  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:53.692113  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:53.692185  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:53.728359  644218 cri.go:89] found id: ""
	I0210 14:03:53.728402  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.728414  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:53.728426  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:53.728506  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:53.770834  644218 cri.go:89] found id: ""
	I0210 14:03:53.770867  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.770878  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:53.770885  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:53.771485  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:53.806380  644218 cri.go:89] found id: ""
	I0210 14:03:53.806427  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.806436  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:53.806444  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:53.806509  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:53.844373  644218 cri.go:89] found id: ""
	I0210 14:03:53.844411  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.844423  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:53.844433  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:53.844518  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:53.885072  644218 cri.go:89] found id: ""
	I0210 14:03:53.885106  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.885119  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:53.885127  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:53.885192  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:53.919681  644218 cri.go:89] found id: ""
	I0210 14:03:53.919714  644218 logs.go:282] 0 containers: []
	W0210 14:03:53.919726  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:53.919745  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:53.919828  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:53.934592  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:53.934627  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:54.016162  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:54.016195  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:54.016213  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:54.096979  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:54.097022  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:54.138788  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:54.138825  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:56.701868  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:56.721115  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:56.721186  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:56.769228  644218 cri.go:89] found id: ""
	I0210 14:03:56.769263  644218 logs.go:282] 0 containers: []
	W0210 14:03:56.769284  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:56.769293  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:56.769360  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:56.805830  644218 cri.go:89] found id: ""
	I0210 14:03:56.805867  644218 logs.go:282] 0 containers: []
	W0210 14:03:56.805878  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:56.805884  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:56.805950  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:03:56.846721  644218 cri.go:89] found id: ""
	I0210 14:03:56.846747  644218 logs.go:282] 0 containers: []
	W0210 14:03:56.846756  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:03:56.846762  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:03:56.846811  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:03:56.883788  644218 cri.go:89] found id: ""
	I0210 14:03:56.883823  644218 logs.go:282] 0 containers: []
	W0210 14:03:56.883837  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:03:56.883845  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:03:56.883908  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:03:56.923786  644218 cri.go:89] found id: ""
	I0210 14:03:56.923819  644218 logs.go:282] 0 containers: []
	W0210 14:03:56.923831  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:03:56.923840  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:03:56.923908  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:03:56.967660  644218 cri.go:89] found id: ""
	I0210 14:03:56.967695  644218 logs.go:282] 0 containers: []
	W0210 14:03:56.967708  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:03:56.967717  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:03:56.967780  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:03:57.019223  644218 cri.go:89] found id: ""
	I0210 14:03:57.019244  644218 logs.go:282] 0 containers: []
	W0210 14:03:57.019254  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:03:57.019263  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:03:57.019316  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:03:57.067673  644218 cri.go:89] found id: ""
	I0210 14:03:57.067710  644218 logs.go:282] 0 containers: []
	W0210 14:03:57.067721  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:03:57.067736  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:03:57.067752  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:03:57.120032  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:03:57.120072  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:03:57.174191  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:03:57.174236  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:03:57.190555  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:03:57.190590  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:03:57.262085  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:03:57.262114  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:03:57.262131  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:03:59.849719  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:03:59.867524  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:03:59.867611  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:03:59.918387  644218 cri.go:89] found id: ""
	I0210 14:03:59.918415  644218 logs.go:282] 0 containers: []
	W0210 14:03:59.918423  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:03:59.918429  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:03:59.918480  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:03:59.972473  644218 cri.go:89] found id: ""
	I0210 14:03:59.972507  644218 logs.go:282] 0 containers: []
	W0210 14:03:59.972515  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:03:59.972522  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:03:59.972573  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:00.031171  644218 cri.go:89] found id: ""
	I0210 14:04:00.031202  644218 logs.go:282] 0 containers: []
	W0210 14:04:00.031213  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:00.031219  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:00.031280  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:00.079499  644218 cri.go:89] found id: ""
	I0210 14:04:00.079529  644218 logs.go:282] 0 containers: []
	W0210 14:04:00.079545  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:00.079553  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:00.079611  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:00.131775  644218 cri.go:89] found id: ""
	I0210 14:04:00.131809  644218 logs.go:282] 0 containers: []
	W0210 14:04:00.131820  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:00.131829  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:00.131911  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:00.187361  644218 cri.go:89] found id: ""
	I0210 14:04:00.187392  644218 logs.go:282] 0 containers: []
	W0210 14:04:00.187403  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:00.187412  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:00.187478  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:00.240739  644218 cri.go:89] found id: ""
	I0210 14:04:00.240772  644218 logs.go:282] 0 containers: []
	W0210 14:04:00.240783  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:00.240791  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:00.240855  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:00.308021  644218 cri.go:89] found id: ""
	I0210 14:04:00.308055  644218 logs.go:282] 0 containers: []
	W0210 14:04:00.308066  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:00.308078  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:00.308095  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:00.378065  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:00.378110  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:00.466458  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:00.466506  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:00.494117  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:00.494146  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:00.611747  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:00.611774  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:00.611793  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:03.231769  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:03.247278  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:03.247450  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:03.316724  644218 cri.go:89] found id: ""
	I0210 14:04:03.316816  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.316837  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:03.316855  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:03.316968  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:03.369226  644218 cri.go:89] found id: ""
	I0210 14:04:03.369258  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.369270  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:03.369277  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:03.369354  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:03.416959  644218 cri.go:89] found id: ""
	I0210 14:04:03.416996  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.417009  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:03.417018  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:03.417095  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:03.473130  644218 cri.go:89] found id: ""
	I0210 14:04:03.473166  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.473178  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:03.473187  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:03.473259  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:03.514192  644218 cri.go:89] found id: ""
	I0210 14:04:03.514226  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.514237  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:03.514245  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:03.514313  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:03.563370  644218 cri.go:89] found id: ""
	I0210 14:04:03.563398  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.563406  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:03.563413  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:03.563479  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:03.603462  644218 cri.go:89] found id: ""
	I0210 14:04:03.603495  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.603507  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:03.603516  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:03.603587  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:03.643780  644218 cri.go:89] found id: ""
	I0210 14:04:03.643815  644218 logs.go:282] 0 containers: []
	W0210 14:04:03.643827  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:03.643841  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:03.643861  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:03.727878  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:03.727904  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:03.727923  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:03.813377  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:03.813425  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:03.865187  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:03.865228  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:03.922765  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:03.922812  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:06.440444  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:06.459384  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:06.459470  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:06.510574  644218 cri.go:89] found id: ""
	I0210 14:04:06.510611  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.510623  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:06.510632  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:06.510697  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:06.564168  644218 cri.go:89] found id: ""
	I0210 14:04:06.564198  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.564210  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:06.564219  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:06.564299  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:06.615473  644218 cri.go:89] found id: ""
	I0210 14:04:06.615566  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.615589  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:06.615607  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:06.615712  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:06.661221  644218 cri.go:89] found id: ""
	I0210 14:04:06.661254  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.661266  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:06.661282  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:06.661350  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:06.708951  644218 cri.go:89] found id: ""
	I0210 14:04:06.709052  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.709078  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:06.709095  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:06.709233  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:06.760008  644218 cri.go:89] found id: ""
	I0210 14:04:06.760036  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.760048  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:06.760056  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:06.760253  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:06.807902  644218 cri.go:89] found id: ""
	I0210 14:04:06.807991  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.808017  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:06.808041  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:06.808114  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:06.855703  644218 cri.go:89] found id: ""
	I0210 14:04:06.855796  644218 logs.go:282] 0 containers: []
	W0210 14:04:06.855820  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:06.855844  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:06.855881  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:06.911392  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:06.911430  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:06.996108  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:06.996161  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:07.014929  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:07.014969  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:07.122885  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:07.122989  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:07.123032  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:09.714763  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:09.733703  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:09.733894  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:09.785507  644218 cri.go:89] found id: ""
	I0210 14:04:09.785628  644218 logs.go:282] 0 containers: []
	W0210 14:04:09.785677  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:09.785704  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:09.785802  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:09.837066  644218 cri.go:89] found id: ""
	I0210 14:04:09.837180  644218 logs.go:282] 0 containers: []
	W0210 14:04:09.837208  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:09.837247  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:09.837355  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:09.893555  644218 cri.go:89] found id: ""
	I0210 14:04:09.893656  644218 logs.go:282] 0 containers: []
	W0210 14:04:09.893688  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:09.893717  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:09.893804  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:09.945110  644218 cri.go:89] found id: ""
	I0210 14:04:09.945224  644218 logs.go:282] 0 containers: []
	W0210 14:04:09.945255  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:09.945287  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:09.945381  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:09.995213  644218 cri.go:89] found id: ""
	I0210 14:04:09.995307  644218 logs.go:282] 0 containers: []
	W0210 14:04:09.995332  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:09.995352  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:09.995459  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:10.046622  644218 cri.go:89] found id: ""
	I0210 14:04:10.046718  644218 logs.go:282] 0 containers: []
	W0210 14:04:10.046742  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:10.046761  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:10.046882  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:10.097595  644218 cri.go:89] found id: ""
	I0210 14:04:10.097653  644218 logs.go:282] 0 containers: []
	W0210 14:04:10.097667  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:10.097676  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:10.097784  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:10.147332  644218 cri.go:89] found id: ""
	I0210 14:04:10.147367  644218 logs.go:282] 0 containers: []
	W0210 14:04:10.147379  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:10.147395  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:10.147413  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:10.222884  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:10.222949  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:10.244718  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:10.244775  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:10.331505  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:10.331541  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:10.331556  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:10.450242  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:10.450308  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:13.016070  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:13.028830  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:13.028905  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:13.066377  644218 cri.go:89] found id: ""
	I0210 14:04:13.066412  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.066420  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:13.066427  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:13.066493  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:13.102952  644218 cri.go:89] found id: ""
	I0210 14:04:13.102987  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.102998  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:13.103006  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:13.103074  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:13.137710  644218 cri.go:89] found id: ""
	I0210 14:04:13.137740  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.137748  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:13.137755  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:13.137810  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:13.173031  644218 cri.go:89] found id: ""
	I0210 14:04:13.173071  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.173083  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:13.173092  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:13.173167  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:13.212648  644218 cri.go:89] found id: ""
	I0210 14:04:13.212688  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.212700  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:13.212708  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:13.212775  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:13.246424  644218 cri.go:89] found id: ""
	I0210 14:04:13.246458  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.246470  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:13.246479  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:13.246546  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:13.280516  644218 cri.go:89] found id: ""
	I0210 14:04:13.280545  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.280553  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:13.280559  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:13.280628  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:13.316413  644218 cri.go:89] found id: ""
	I0210 14:04:13.316443  644218 logs.go:282] 0 containers: []
	W0210 14:04:13.316453  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:13.316473  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:13.316489  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:13.394157  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:13.394198  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:13.437841  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:13.437873  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:13.490663  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:13.490699  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:13.504453  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:13.504483  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:13.597917  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:16.099604  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:16.116150  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:16.116229  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:16.161518  644218 cri.go:89] found id: ""
	I0210 14:04:16.161550  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.161569  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:16.161578  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:16.161648  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:16.208460  644218 cri.go:89] found id: ""
	I0210 14:04:16.208496  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.208509  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:16.208518  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:16.208593  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:16.253399  644218 cri.go:89] found id: ""
	I0210 14:04:16.253434  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.253446  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:16.253454  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:16.253524  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:16.294610  644218 cri.go:89] found id: ""
	I0210 14:04:16.294642  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.294655  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:16.294665  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:16.294730  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:16.331566  644218 cri.go:89] found id: ""
	I0210 14:04:16.331599  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.331612  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:16.331621  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:16.331695  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:16.371414  644218 cri.go:89] found id: ""
	I0210 14:04:16.371449  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.371461  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:16.371469  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:16.371539  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:16.412836  644218 cri.go:89] found id: ""
	I0210 14:04:16.412866  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.412874  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:16.412880  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:16.412942  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:16.447262  644218 cri.go:89] found id: ""
	I0210 14:04:16.447298  644218 logs.go:282] 0 containers: []
	W0210 14:04:16.447309  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:16.447324  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:16.447340  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:16.492916  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:16.492962  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:16.565647  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:16.565687  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:16.581257  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:16.581294  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:16.654054  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:16.654084  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:16.654100  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:19.236380  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:19.251004  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:19.251064  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:19.294712  644218 cri.go:89] found id: ""
	I0210 14:04:19.294737  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.294744  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:19.294751  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:19.294808  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:19.329463  644218 cri.go:89] found id: ""
	I0210 14:04:19.329497  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.329508  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:19.329517  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:19.329588  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:19.365465  644218 cri.go:89] found id: ""
	I0210 14:04:19.365488  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.365498  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:19.365506  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:19.365573  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:19.402358  644218 cri.go:89] found id: ""
	I0210 14:04:19.402390  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.402400  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:19.402407  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:19.402479  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:19.459652  644218 cri.go:89] found id: ""
	I0210 14:04:19.459685  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.459697  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:19.459707  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:19.459772  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:19.527146  644218 cri.go:89] found id: ""
	I0210 14:04:19.527181  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.527193  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:19.527202  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:19.527276  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:19.574422  644218 cri.go:89] found id: ""
	I0210 14:04:19.574456  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.574468  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:19.574477  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:19.574549  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:19.614870  644218 cri.go:89] found id: ""
	I0210 14:04:19.614908  644218 logs.go:282] 0 containers: []
	W0210 14:04:19.614920  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:19.614933  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:19.614955  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:19.705928  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:19.705964  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:19.761293  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:19.761324  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:19.826166  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:19.826210  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:19.844377  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:19.844420  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:19.917730  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:22.418787  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:22.432993  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:22.433062  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:22.471280  644218 cri.go:89] found id: ""
	I0210 14:04:22.471310  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.471318  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:22.471325  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:22.471386  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:22.513173  644218 cri.go:89] found id: ""
	I0210 14:04:22.513212  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.513225  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:22.513233  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:22.513292  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:22.550895  644218 cri.go:89] found id: ""
	I0210 14:04:22.550924  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.550933  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:22.550939  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:22.551019  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:22.589099  644218 cri.go:89] found id: ""
	I0210 14:04:22.589133  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.589144  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:22.589153  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:22.589222  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:22.625710  644218 cri.go:89] found id: ""
	I0210 14:04:22.625742  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.625755  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:22.625763  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:22.625837  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:22.660329  644218 cri.go:89] found id: ""
	I0210 14:04:22.660357  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.660365  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:22.660371  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:22.660423  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:22.698677  644218 cri.go:89] found id: ""
	I0210 14:04:22.698712  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.698723  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:22.698732  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:22.698799  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:22.737099  644218 cri.go:89] found id: ""
	I0210 14:04:22.737138  644218 logs.go:282] 0 containers: []
	W0210 14:04:22.737150  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:22.737164  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:22.737180  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:22.790120  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:22.790157  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:22.808718  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:22.808757  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:22.890820  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:22.890846  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:22.890865  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:22.981546  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:22.981594  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:25.523901  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:25.537869  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:25.537953  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:25.574818  644218 cri.go:89] found id: ""
	I0210 14:04:25.574850  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.574861  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:25.574870  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:25.574955  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:25.610778  644218 cri.go:89] found id: ""
	I0210 14:04:25.610815  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.610827  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:25.610836  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:25.610907  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:25.644360  644218 cri.go:89] found id: ""
	I0210 14:04:25.644388  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.644403  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:25.644409  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:25.644463  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:25.677880  644218 cri.go:89] found id: ""
	I0210 14:04:25.677909  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.677917  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:25.677924  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:25.677976  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:25.716128  644218 cri.go:89] found id: ""
	I0210 14:04:25.716153  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.716161  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:25.716168  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:25.716229  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:25.752602  644218 cri.go:89] found id: ""
	I0210 14:04:25.752630  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.752637  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:25.752643  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:25.752711  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:25.787312  644218 cri.go:89] found id: ""
	I0210 14:04:25.787346  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.787358  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:25.787367  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:25.787431  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:25.824270  644218 cri.go:89] found id: ""
	I0210 14:04:25.824316  644218 logs.go:282] 0 containers: []
	W0210 14:04:25.824327  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:25.824341  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:25.824354  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:25.878657  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:25.878707  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:25.894780  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:25.894817  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:25.972554  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:25.972580  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:25.972594  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:26.055852  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:26.055890  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:28.599298  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:28.616763  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:28.616834  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:28.668138  644218 cri.go:89] found id: ""
	I0210 14:04:28.668172  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.668185  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:28.668200  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:28.668297  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:28.717611  644218 cri.go:89] found id: ""
	I0210 14:04:28.717650  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.717662  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:28.717670  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:28.717745  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:28.766310  644218 cri.go:89] found id: ""
	I0210 14:04:28.766343  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.766354  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:28.766361  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:28.766423  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:28.805030  644218 cri.go:89] found id: ""
	I0210 14:04:28.805054  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.805064  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:28.805072  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:28.805133  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:28.844923  644218 cri.go:89] found id: ""
	I0210 14:04:28.844964  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.844975  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:28.844982  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:28.845035  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:28.889081  644218 cri.go:89] found id: ""
	I0210 14:04:28.889122  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.889135  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:28.889143  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:28.889223  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:28.933656  644218 cri.go:89] found id: ""
	I0210 14:04:28.933706  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.933720  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:28.933728  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:28.933795  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:28.977695  644218 cri.go:89] found id: ""
	I0210 14:04:28.977729  644218 logs.go:282] 0 containers: []
	W0210 14:04:28.977741  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:28.977756  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:28.977774  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:29.034636  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:29.034672  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:29.103677  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:29.103715  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:29.121208  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:29.121250  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:29.205799  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:29.205826  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:29.205843  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:31.820430  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:31.838968  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:31.839053  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:31.886810  644218 cri.go:89] found id: ""
	I0210 14:04:31.886848  644218 logs.go:282] 0 containers: []
	W0210 14:04:31.886858  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:31.886872  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:31.886951  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:31.926138  644218 cri.go:89] found id: ""
	I0210 14:04:31.926171  644218 logs.go:282] 0 containers: []
	W0210 14:04:31.926180  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:31.926186  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:31.926241  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:31.979843  644218 cri.go:89] found id: ""
	I0210 14:04:31.979880  644218 logs.go:282] 0 containers: []
	W0210 14:04:31.979892  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:31.979903  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:31.979994  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:32.025011  644218 cri.go:89] found id: ""
	I0210 14:04:32.025043  644218 logs.go:282] 0 containers: []
	W0210 14:04:32.025051  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:32.025066  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:32.025130  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:32.067220  644218 cri.go:89] found id: ""
	I0210 14:04:32.067259  644218 logs.go:282] 0 containers: []
	W0210 14:04:32.067272  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:32.067287  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:32.067370  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:32.115186  644218 cri.go:89] found id: ""
	I0210 14:04:32.115220  644218 logs.go:282] 0 containers: []
	W0210 14:04:32.115234  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:32.115242  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:32.115316  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:32.160490  644218 cri.go:89] found id: ""
	I0210 14:04:32.160527  644218 logs.go:282] 0 containers: []
	W0210 14:04:32.160542  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:32.160551  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:32.160622  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:32.200768  644218 cri.go:89] found id: ""
	I0210 14:04:32.200803  644218 logs.go:282] 0 containers: []
	W0210 14:04:32.200812  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:32.200822  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:32.200846  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:32.214903  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:32.214935  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:32.288979  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:32.288999  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:32.289014  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:32.368508  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:32.368557  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:32.415445  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:32.415475  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:34.982794  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:35.000340  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:35.000411  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:35.039533  644218 cri.go:89] found id: ""
	I0210 14:04:35.039577  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.039596  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:35.039605  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:35.039677  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:35.076945  644218 cri.go:89] found id: ""
	I0210 14:04:35.076976  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.076987  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:35.076995  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:35.077062  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:35.120766  644218 cri.go:89] found id: ""
	I0210 14:04:35.120799  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.120811  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:35.120819  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:35.120885  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:35.163060  644218 cri.go:89] found id: ""
	I0210 14:04:35.163098  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.163110  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:35.163118  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:35.163181  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:35.207599  644218 cri.go:89] found id: ""
	I0210 14:04:35.207634  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.207646  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:35.207655  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:35.207727  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:35.248012  644218 cri.go:89] found id: ""
	I0210 14:04:35.248052  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.248063  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:35.248072  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:35.248143  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:35.290511  644218 cri.go:89] found id: ""
	I0210 14:04:35.290546  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.290558  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:35.290566  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:35.290635  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:35.334379  644218 cri.go:89] found id: ""
	I0210 14:04:35.334406  644218 logs.go:282] 0 containers: []
	W0210 14:04:35.334414  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:35.334425  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:35.334441  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:35.389876  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:35.389916  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:35.403689  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:35.403719  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:35.478418  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:35.478440  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:35.478454  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:35.554002  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:35.554037  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:38.094279  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:38.110337  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:38.110414  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:38.150470  644218 cri.go:89] found id: ""
	I0210 14:04:38.150512  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.150525  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:38.150534  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:38.150609  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:38.198042  644218 cri.go:89] found id: ""
	I0210 14:04:38.198147  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.198164  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:38.198182  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:38.198283  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:38.247361  644218 cri.go:89] found id: ""
	I0210 14:04:38.247404  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.247416  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:38.247425  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:38.247496  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:38.290059  644218 cri.go:89] found id: ""
	I0210 14:04:38.290093  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.290105  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:38.290114  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:38.290191  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:38.336971  644218 cri.go:89] found id: ""
	I0210 14:04:38.337007  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.337019  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:38.337027  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:38.337107  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:38.382583  644218 cri.go:89] found id: ""
	I0210 14:04:38.382629  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.382641  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:38.382650  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:38.382734  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:38.422815  644218 cri.go:89] found id: ""
	I0210 14:04:38.422855  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.422867  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:38.422874  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:38.422974  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:38.458136  644218 cri.go:89] found id: ""
	I0210 14:04:38.458173  644218 logs.go:282] 0 containers: []
	W0210 14:04:38.458185  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:38.458198  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:38.458213  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:38.525711  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:38.525752  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:38.543916  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:38.543953  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:38.619804  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:38.619828  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:38.619845  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:38.697422  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:38.697464  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:41.238057  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:41.251408  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:41.251472  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:41.283956  644218 cri.go:89] found id: ""
	I0210 14:04:41.284006  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.284018  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:41.284028  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:41.284102  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:41.317647  644218 cri.go:89] found id: ""
	I0210 14:04:41.317677  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.317690  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:41.317699  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:41.317767  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:41.352954  644218 cri.go:89] found id: ""
	I0210 14:04:41.352994  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.353006  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:41.353014  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:41.353082  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:41.391054  644218 cri.go:89] found id: ""
	I0210 14:04:41.391083  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.391092  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:41.391099  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:41.391168  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:41.432442  644218 cri.go:89] found id: ""
	I0210 14:04:41.432477  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.432488  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:41.432496  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:41.432567  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:41.468068  644218 cri.go:89] found id: ""
	I0210 14:04:41.468097  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.468105  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:41.468111  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:41.468181  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:41.503320  644218 cri.go:89] found id: ""
	I0210 14:04:41.503357  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.503370  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:41.503380  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:41.503448  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:41.551038  644218 cri.go:89] found id: ""
	I0210 14:04:41.551072  644218 logs.go:282] 0 containers: []
	W0210 14:04:41.551084  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:41.551097  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:41.551114  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:41.639852  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:41.639905  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:41.683635  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:41.683667  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:41.736986  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:41.737042  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:41.753736  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:41.753767  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:41.829351  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:44.330475  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:44.345112  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:44.345202  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:44.384608  644218 cri.go:89] found id: ""
	I0210 14:04:44.384642  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.384653  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:44.384662  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:44.384833  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:44.422135  644218 cri.go:89] found id: ""
	I0210 14:04:44.422173  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.422185  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:44.422194  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:44.422273  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:44.455721  644218 cri.go:89] found id: ""
	I0210 14:04:44.455749  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.455757  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:44.455763  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:44.455816  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:44.488925  644218 cri.go:89] found id: ""
	I0210 14:04:44.488958  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.488970  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:44.488978  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:44.489037  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:44.523205  644218 cri.go:89] found id: ""
	I0210 14:04:44.523241  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.523253  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:44.523263  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:44.523334  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:44.563353  644218 cri.go:89] found id: ""
	I0210 14:04:44.563381  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.563391  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:44.563397  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:44.563464  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:44.603538  644218 cri.go:89] found id: ""
	I0210 14:04:44.603576  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.603587  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:44.603595  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:44.603668  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:44.643042  644218 cri.go:89] found id: ""
	I0210 14:04:44.643078  644218 logs.go:282] 0 containers: []
	W0210 14:04:44.643091  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:44.643104  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:44.643120  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:44.695075  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:44.695116  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:44.709594  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:44.709624  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:44.777764  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:44.777791  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:44.777807  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:44.866485  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:44.866529  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:47.417593  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:47.430343  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:47.430421  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:47.471825  644218 cri.go:89] found id: ""
	I0210 14:04:47.471866  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.471874  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:47.471880  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:47.471946  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:47.508681  644218 cri.go:89] found id: ""
	I0210 14:04:47.508719  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.508739  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:47.508748  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:47.508821  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:47.546153  644218 cri.go:89] found id: ""
	I0210 14:04:47.546181  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.546193  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:47.546203  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:47.546283  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:47.586659  644218 cri.go:89] found id: ""
	I0210 14:04:47.586693  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.586702  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:47.586710  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:47.586777  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:47.624689  644218 cri.go:89] found id: ""
	I0210 14:04:47.624719  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.624728  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:47.624734  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:47.624798  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:47.667862  644218 cri.go:89] found id: ""
	I0210 14:04:47.667893  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.667905  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:47.667914  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:47.667981  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:47.701647  644218 cri.go:89] found id: ""
	I0210 14:04:47.701682  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.701693  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:47.701703  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:47.701772  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:47.734988  644218 cri.go:89] found id: ""
	I0210 14:04:47.735022  644218 logs.go:282] 0 containers: []
	W0210 14:04:47.735031  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:47.735041  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:47.735066  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:47.814508  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:47.814547  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:47.857920  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:47.857960  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:47.919939  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:47.919975  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:47.936259  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:47.936294  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:48.013055  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:50.513279  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:50.527871  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:50.527946  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:50.565998  644218 cri.go:89] found id: ""
	I0210 14:04:50.566026  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.566035  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:50.566043  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:50.566110  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:50.605596  644218 cri.go:89] found id: ""
	I0210 14:04:50.605622  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.605630  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:50.605636  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:50.605696  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:50.645939  644218 cri.go:89] found id: ""
	I0210 14:04:50.645968  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.645980  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:50.645987  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:50.646054  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:50.681990  644218 cri.go:89] found id: ""
	I0210 14:04:50.682024  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.682036  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:50.682045  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:50.682111  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:50.724472  644218 cri.go:89] found id: ""
	I0210 14:04:50.724505  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.724516  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:50.724525  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:50.724592  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:50.766077  644218 cri.go:89] found id: ""
	I0210 14:04:50.766107  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.766118  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:50.766127  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:50.766194  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:50.805833  644218 cri.go:89] found id: ""
	I0210 14:04:50.805872  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.805886  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:50.805895  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:50.805978  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:50.843215  644218 cri.go:89] found id: ""
	I0210 14:04:50.843254  644218 logs.go:282] 0 containers: []
	W0210 14:04:50.843266  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:50.843282  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:50.843298  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:50.909742  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:50.909775  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:50.909793  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:50.992712  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:50.992756  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:51.040691  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:51.040731  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:51.090910  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:51.090952  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:53.606118  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:53.619182  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:53.619255  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:53.654301  644218 cri.go:89] found id: ""
	I0210 14:04:53.654347  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.654360  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:53.654370  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:53.654441  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:53.689696  644218 cri.go:89] found id: ""
	I0210 14:04:53.689727  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.689738  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:53.689745  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:53.689815  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:53.724877  644218 cri.go:89] found id: ""
	I0210 14:04:53.724910  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.724920  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:53.724937  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:53.725004  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:53.761008  644218 cri.go:89] found id: ""
	I0210 14:04:53.761034  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.761042  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:53.761048  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:53.761099  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:53.795913  644218 cri.go:89] found id: ""
	I0210 14:04:53.795940  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.795949  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:53.795955  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:53.796005  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:53.834012  644218 cri.go:89] found id: ""
	I0210 14:04:53.834040  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.834047  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:53.834053  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:53.834105  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:53.874002  644218 cri.go:89] found id: ""
	I0210 14:04:53.874029  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.874037  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:53.874044  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:53.874093  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:53.907172  644218 cri.go:89] found id: ""
	I0210 14:04:53.907197  644218 logs.go:282] 0 containers: []
	W0210 14:04:53.907205  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:53.907218  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:53.907239  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:53.920613  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:53.920639  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:53.989040  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:53.989062  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:53.989077  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:54.061102  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:54.061144  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:54.101296  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:54.101329  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:56.651290  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:56.664411  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:56.664477  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:56.699097  644218 cri.go:89] found id: ""
	I0210 14:04:56.699139  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.699153  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:56.699162  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:56.699257  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:56.731817  644218 cri.go:89] found id: ""
	I0210 14:04:56.731861  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.731872  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:56.731880  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:56.731955  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:56.763524  644218 cri.go:89] found id: ""
	I0210 14:04:56.763558  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.763567  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:56.763573  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:56.763630  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:56.797514  644218 cri.go:89] found id: ""
	I0210 14:04:56.797553  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.797566  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:56.797575  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:56.797636  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:56.834669  644218 cri.go:89] found id: ""
	I0210 14:04:56.834705  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.834716  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:56.834724  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:56.834798  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:56.868080  644218 cri.go:89] found id: ""
	I0210 14:04:56.868112  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.868121  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:56.868128  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:56.868187  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:56.906821  644218 cri.go:89] found id: ""
	I0210 14:04:56.906852  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.906862  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:56.906871  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:56.906935  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:56.954389  644218 cri.go:89] found id: ""
	I0210 14:04:56.954417  644218 logs.go:282] 0 containers: []
	W0210 14:04:56.954427  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:56.954440  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:04:56.954455  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:04:57.032384  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:04:57.032423  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:04:57.076374  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:57.076405  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:04:57.123784  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:04:57.123815  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:04:57.137783  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:04:57.137808  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:04:57.205160  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:04:59.705729  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:04:59.718907  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:04:59.718983  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:04:59.751949  644218 cri.go:89] found id: ""
	I0210 14:04:59.751979  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.751988  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:04:59.751994  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:04:59.752054  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:04:59.783810  644218 cri.go:89] found id: ""
	I0210 14:04:59.783842  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.783850  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:04:59.783856  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:04:59.783911  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:04:59.817057  644218 cri.go:89] found id: ""
	I0210 14:04:59.817090  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.817099  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:04:59.817105  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:04:59.817168  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:04:59.849726  644218 cri.go:89] found id: ""
	I0210 14:04:59.849752  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.849761  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:04:59.849767  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:04:59.849829  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:04:59.884435  644218 cri.go:89] found id: ""
	I0210 14:04:59.884471  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.884483  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:04:59.884492  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:04:59.884566  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:04:59.917969  644218 cri.go:89] found id: ""
	I0210 14:04:59.918009  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.918021  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:04:59.918029  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:04:59.918107  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:04:59.950432  644218 cri.go:89] found id: ""
	I0210 14:04:59.950466  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.950476  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:04:59.950484  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:04:59.950554  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:04:59.985652  644218 cri.go:89] found id: ""
	I0210 14:04:59.985690  644218 logs.go:282] 0 containers: []
	W0210 14:04:59.985703  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:04:59.985716  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:04:59.985729  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:05:00.034008  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:05:00.034044  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:05:00.047619  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:05:00.047645  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:05:00.116509  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:05:00.116532  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:05:00.116560  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:05:00.195803  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:05:00.195843  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:05:02.735238  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:05:02.748733  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:05:02.748797  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:05:02.796328  644218 cri.go:89] found id: ""
	I0210 14:05:02.796356  644218 logs.go:282] 0 containers: []
	W0210 14:05:02.796368  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:05:02.796384  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:05:02.796449  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:05:02.835795  644218 cri.go:89] found id: ""
	I0210 14:05:02.835832  644218 logs.go:282] 0 containers: []
	W0210 14:05:02.835845  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:05:02.835854  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:05:02.835931  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:05:02.873781  644218 cri.go:89] found id: ""
	I0210 14:05:02.873817  644218 logs.go:282] 0 containers: []
	W0210 14:05:02.873830  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:05:02.873839  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:05:02.873901  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:05:02.910258  644218 cri.go:89] found id: ""
	I0210 14:05:02.910282  644218 logs.go:282] 0 containers: []
	W0210 14:05:02.910290  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:05:02.910296  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:05:02.910346  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:05:02.947860  644218 cri.go:89] found id: ""
	I0210 14:05:02.947886  644218 logs.go:282] 0 containers: []
	W0210 14:05:02.947901  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:05:02.947915  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:05:02.947978  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:05:02.983813  644218 cri.go:89] found id: ""
	I0210 14:05:02.983840  644218 logs.go:282] 0 containers: []
	W0210 14:05:02.983848  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:05:02.983855  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:05:02.983925  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:05:03.022895  644218 cri.go:89] found id: ""
	I0210 14:05:03.022931  644218 logs.go:282] 0 containers: []
	W0210 14:05:03.022951  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:05:03.022960  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:05:03.023025  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:05:03.058954  644218 cri.go:89] found id: ""
	I0210 14:05:03.058990  644218 logs.go:282] 0 containers: []
	W0210 14:05:03.059001  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:05:03.059016  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:05:03.059034  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:05:03.108836  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:05:03.108876  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:05:03.123154  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:05:03.123197  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:05:03.194024  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:05:03.194053  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:05:03.194072  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:05:03.277898  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:05:03.277936  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:05:05.843018  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:05:05.860977  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:05:05.861057  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:05:05.899618  644218 cri.go:89] found id: ""
	I0210 14:05:05.899651  644218 logs.go:282] 0 containers: []
	W0210 14:05:05.899661  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:05:05.899670  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:05:05.899736  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:05:05.931778  644218 cri.go:89] found id: ""
	I0210 14:05:05.931813  644218 logs.go:282] 0 containers: []
	W0210 14:05:05.931824  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:05:05.931832  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:05:05.931907  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:05:05.965743  644218 cri.go:89] found id: ""
	I0210 14:05:05.965768  644218 logs.go:282] 0 containers: []
	W0210 14:05:05.965776  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:05:05.965789  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:05:05.965840  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:05:05.999808  644218 cri.go:89] found id: ""
	I0210 14:05:05.999836  644218 logs.go:282] 0 containers: []
	W0210 14:05:05.999844  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:05:05.999851  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:05:05.999930  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:05:06.038904  644218 cri.go:89] found id: ""
	I0210 14:05:06.038935  644218 logs.go:282] 0 containers: []
	W0210 14:05:06.038946  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:05:06.038954  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:05:06.039032  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:05:06.079151  644218 cri.go:89] found id: ""
	I0210 14:05:06.079185  644218 logs.go:282] 0 containers: []
	W0210 14:05:06.079196  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:05:06.079205  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:05:06.079284  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:05:06.112237  644218 cri.go:89] found id: ""
	I0210 14:05:06.112271  644218 logs.go:282] 0 containers: []
	W0210 14:05:06.112295  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:05:06.112305  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:05:06.112372  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:05:06.145187  644218 cri.go:89] found id: ""
	I0210 14:05:06.145228  644218 logs.go:282] 0 containers: []
	W0210 14:05:06.145242  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:05:06.145255  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:05:06.145271  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:05:06.199929  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:05:06.199976  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:05:06.214211  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:05:06.214244  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:05:06.284320  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:05:06.284348  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:05:06.284367  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:05:06.373668  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:05:06.373720  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:05:08.914861  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:05:08.928047  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:05:08.928116  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:05:08.961811  644218 cri.go:89] found id: ""
	I0210 14:05:08.961841  644218 logs.go:282] 0 containers: []
	W0210 14:05:08.961850  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:05:08.961856  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:05:08.961909  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:05:08.996154  644218 cri.go:89] found id: ""
	I0210 14:05:08.996187  644218 logs.go:282] 0 containers: []
	W0210 14:05:08.996205  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:05:08.996213  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:05:08.996308  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:05:09.030621  644218 cri.go:89] found id: ""
	I0210 14:05:09.030669  644218 logs.go:282] 0 containers: []
	W0210 14:05:09.030682  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:05:09.030692  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:05:09.030765  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:05:09.069590  644218 cri.go:89] found id: ""
	I0210 14:05:09.069622  644218 logs.go:282] 0 containers: []
	W0210 14:05:09.069630  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:05:09.069637  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:05:09.069701  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:05:09.120574  644218 cri.go:89] found id: ""
	I0210 14:05:09.120601  644218 logs.go:282] 0 containers: []
	W0210 14:05:09.120610  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:05:09.120616  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:05:09.120672  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:05:09.176626  644218 cri.go:89] found id: ""
	I0210 14:05:09.176662  644218 logs.go:282] 0 containers: []
	W0210 14:05:09.176675  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:05:09.176684  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:05:09.176755  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:05:09.223695  644218 cri.go:89] found id: ""
	I0210 14:05:09.223724  644218 logs.go:282] 0 containers: []
	W0210 14:05:09.223732  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:05:09.223738  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:05:09.223791  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:05:09.258884  644218 cri.go:89] found id: ""
	I0210 14:05:09.258919  644218 logs.go:282] 0 containers: []
	W0210 14:05:09.258931  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:05:09.258944  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:05:09.258960  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 14:05:09.298127  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:05:09.298165  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:05:09.349713  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:05:09.349755  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:05:09.365484  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:05:09.365517  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:05:09.432369  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:05:09.432402  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:05:09.432415  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:05:12.010716  644218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:05:12.024436  644218 kubeadm.go:597] duration metric: took 4m2.486109068s to restartPrimaryControlPlane
	W0210 14:05:12.024533  644218 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 14:05:12.024569  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:05:14.252674  644218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.228078s)
	I0210 14:05:14.252759  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:05:14.272856  644218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:05:14.288400  644218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:05:14.303434  644218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:05:14.303465  644218 kubeadm.go:157] found existing configuration files:
	
	I0210 14:05:14.303525  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 14:05:14.315038  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:05:14.315122  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:05:14.326412  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 14:05:14.337317  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:05:14.337378  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:05:14.348608  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 14:05:14.358965  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:05:14.359045  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:05:14.369819  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 14:05:14.383996  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:05:14.384069  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:05:14.398474  644218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 14:05:14.477129  644218 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 14:05:14.477256  644218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:05:14.637430  644218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:05:14.637598  644218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:05:14.637733  644218 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 14:05:14.842602  644218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:05:14.845045  644218 out.go:235]   - Generating certificates and keys ...
	I0210 14:05:14.845167  644218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:05:14.845248  644218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:05:14.845377  644218 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:05:14.845482  644218 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:05:14.845577  644218 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:05:14.845657  644218 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:05:14.845747  644218 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:05:14.845837  644218 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:05:14.845952  644218 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:05:14.846421  644218 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:05:14.846486  644218 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:05:14.846578  644218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:05:15.117929  644218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:05:15.458066  644218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:05:15.542714  644218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:05:15.726195  644218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:05:15.741746  644218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:05:15.744371  644218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:05:15.744588  644218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:05:15.898375  644218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:05:15.900333  644218 out.go:235]   - Booting up control plane ...
	I0210 14:05:15.900443  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:05:15.901118  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:05:15.903480  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:05:15.904634  644218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:05:15.912079  644218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 14:05:55.914031  644218 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 14:05:55.914724  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:05:55.914936  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:06:00.915708  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:06:00.915926  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:06:10.916482  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:06:10.916688  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:06:30.917338  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:06:30.917550  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:07:10.919140  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:07:10.919450  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:07:10.919470  644218 kubeadm.go:310] 
	I0210 14:07:10.919531  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:07:10.919612  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:07:10.919643  644218 kubeadm.go:310] 
	I0210 14:07:10.919696  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:07:10.919740  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:07:10.919898  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:07:10.919908  644218 kubeadm.go:310] 
	I0210 14:07:10.920052  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:07:10.920108  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:07:10.920160  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:07:10.920171  644218 kubeadm.go:310] 
	I0210 14:07:10.920344  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:07:10.920471  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:07:10.920487  644218 kubeadm.go:310] 
	I0210 14:07:10.920637  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:07:10.920748  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:07:10.920852  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:07:10.920956  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:07:10.920968  644218 kubeadm.go:310] 
	I0210 14:07:10.921451  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:07:10.921558  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:07:10.921647  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0210 14:07:10.921820  644218 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 14:07:10.921873  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:07:11.388800  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:07:11.404434  644218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:07:11.415583  644218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:07:11.415609  644218 kubeadm.go:157] found existing configuration files:
	
	I0210 14:07:11.415668  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 14:07:11.425343  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:07:11.425411  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:07:11.435126  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 14:07:11.444951  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:07:11.445016  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:07:11.454675  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.463839  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:07:11.463923  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.473621  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 14:07:11.482802  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:07:11.482864  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:07:11.492269  644218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 14:07:11.706383  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:09:07.694951  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:09:07.695080  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 14:09:07.696680  644218 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 14:09:07.696776  644218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:09:07.696928  644218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:09:07.697091  644218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:09:07.697242  644218 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 14:09:07.697319  644218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:09:07.698867  644218 out.go:235]   - Generating certificates and keys ...
	I0210 14:09:07.698960  644218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:09:07.699052  644218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:09:07.699176  644218 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:09:07.699261  644218 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:09:07.699354  644218 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:09:07.699403  644218 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:09:07.699465  644218 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:09:07.699527  644218 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:09:07.699633  644218 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:09:07.699731  644218 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:09:07.699800  644218 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:09:07.699884  644218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:09:07.699960  644218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:09:07.700047  644218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:09:07.700138  644218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:09:07.700209  644218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:09:07.700322  644218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:09:07.700393  644218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:09:07.700436  644218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:09:07.700526  644218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:09:07.701917  644218 out.go:235]   - Booting up control plane ...
	I0210 14:09:07.702014  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:09:07.702107  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:09:07.702184  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:09:07.702300  644218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:09:07.702455  644218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 14:09:07.702532  644218 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 14:09:07.702626  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.702845  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.702940  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703134  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703216  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703373  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703435  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703588  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703650  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703819  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703826  644218 kubeadm.go:310] 
	I0210 14:09:07.703859  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:09:07.703893  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:09:07.703900  644218 kubeadm.go:310] 
	I0210 14:09:07.703933  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:09:07.703994  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:09:07.704123  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:09:07.704131  644218 kubeadm.go:310] 
	I0210 14:09:07.704298  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:09:07.704355  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:09:07.704403  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:09:07.704413  644218 kubeadm.go:310] 
	I0210 14:09:07.704552  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:09:07.704673  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:09:07.704685  644218 kubeadm.go:310] 
	I0210 14:09:07.704841  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:09:07.704960  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:09:07.705074  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:09:07.705199  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:09:07.705210  644218 kubeadm.go:310] 
	I0210 14:09:07.705291  644218 kubeadm.go:394] duration metric: took 7m58.218613622s to StartCluster
	I0210 14:09:07.705343  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:09:07.705405  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:09:07.750026  644218 cri.go:89] found id: ""
	I0210 14:09:07.750054  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.750063  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:09:07.750070  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:09:07.750136  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:09:07.793341  644218 cri.go:89] found id: ""
	I0210 14:09:07.793374  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.793386  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:09:07.793395  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:09:07.793455  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:09:07.835496  644218 cri.go:89] found id: ""
	I0210 14:09:07.835521  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.835538  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:09:07.835543  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:09:07.835620  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:09:07.869619  644218 cri.go:89] found id: ""
	I0210 14:09:07.869655  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.869663  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:09:07.869669  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:09:07.869735  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:09:07.927211  644218 cri.go:89] found id: ""
	I0210 14:09:07.927243  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.927253  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:09:07.927261  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:09:07.927331  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:09:07.966320  644218 cri.go:89] found id: ""
	I0210 14:09:07.966355  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.966365  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:09:07.966374  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:09:07.966437  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:09:07.999268  644218 cri.go:89] found id: ""
	I0210 14:09:07.999302  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.999313  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:09:07.999321  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:09:07.999389  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:09:08.039339  644218 cri.go:89] found id: ""
	I0210 14:09:08.039371  644218 logs.go:282] 0 containers: []
	W0210 14:09:08.039380  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:09:08.039391  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:09:08.039404  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:09:08.091644  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:09:08.091675  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:09:08.105318  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:09:08.105346  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:09:08.182104  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:09:08.182127  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:09:08.182140  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:09:08.287929  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:09:08.287974  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0210 14:09:08.331764  644218 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 14:09:08.331884  644218 out.go:270] * 
	* 
	W0210 14:09:08.332053  644218 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.332079  644218 out.go:270] * 
	* 
	W0210 14:09:08.333029  644218 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 14:09:08.336162  644218 out.go:201] 
	W0210 14:09:08.337200  644218 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.337269  644218 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 14:09:08.337316  644218 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 14:09:08.339083  644218 out.go:201] 

                                                
                                                
** /stderr **
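The kubeadm failure captured above repeatedly points at the kubelet as the component that never came up. A minimal triage sketch, using only the commands the log itself recommends (run inside the guest, e.g. via `minikube ssh -p old-k8s-version-643105`; the final log-collection step runs on the host; the profile name is taken from the failing start command below and is otherwise an assumption):

	# Is the kubelet running, and why did it stop? (from the kubeadm hints above)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 400

	# Did a control-plane container start and crash under CRI-O?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the failing container's logs; CONTAINERID comes from the listing above
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Host-side: collect a full log bundle, as the banner in the output suggests
	minikube logs --file=logs.txt -p old-k8s-version-643105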
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
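The suggestion emitted just before the exit is to override the kubelet cgroup driver. A hedged sketch of retrying the same start with that flag appended (all other flags are copied verbatim from the failing invocation above; whether systemd is in fact the correct cgroup driver for this guest is an assumption):

	out/minikube-linux-amd64 start -p old-k8s-version-643105 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd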
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (231.579876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-643105 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-963165 image list                          | embed-certs-963165           | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:03 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-963165                                  | embed-certs-963165           | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:03 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-963165                                  | embed-certs-963165           | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:03 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-963165                                  | embed-certs-963165           | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:03 UTC |
	| delete  | -p embed-certs-963165                                  | embed-certs-963165           | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:03 UTC |
	| start   | -p newest-cni-187291 --memory=2200 --alsologtostderr   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-264648 image list                           | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	| delete  | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-372614 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | disable-driver-mounts-372614                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991097  | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:06 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-187291             | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-187291                  | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-187291 --memory=2200 --alsologtostderr   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-187291 image list                           | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	| delete  | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	| addons  | enable dashboard -p default-k8s-diff-port-991097       | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:06 UTC | 10 Feb 25 14:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:06 UTC |                     |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 14:06:17
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 14:06:17.243747  647891 out.go:345] Setting OutFile to fd 1 ...
	I0210 14:06:17.244049  647891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:06:17.244060  647891 out.go:358] Setting ErrFile to fd 2...
	I0210 14:06:17.244065  647891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:06:17.244273  647891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 14:06:17.244886  647891 out.go:352] Setting JSON to false
	I0210 14:06:17.245898  647891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13722,"bootTime":1739182655,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 14:06:17.246027  647891 start.go:139] virtualization: kvm guest
	I0210 14:06:17.248712  647891 out.go:177] * [default-k8s-diff-port-991097] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 14:06:17.249739  647891 notify.go:220] Checking for updates...
	I0210 14:06:17.249783  647891 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 14:06:17.250816  647891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 14:06:17.251974  647891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:06:17.252995  647891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 14:06:17.254055  647891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 14:06:17.255160  647891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 14:06:17.256646  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:06:17.257053  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.257103  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.272251  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0210 14:06:17.272688  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.273235  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.273265  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.273611  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.273803  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.274066  647891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 14:06:17.274374  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.274410  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.289090  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0210 14:06:17.289485  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.289921  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.289940  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.290252  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.290429  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.324494  647891 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 14:06:17.325653  647891 start.go:297] selected driver: kvm2
	I0210 14:06:17.325667  647891 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8
s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:17.325821  647891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 14:06:17.326767  647891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:06:17.326863  647891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 14:06:17.341811  647891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 14:06:17.342243  647891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 14:06:17.342292  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:06:17.342352  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:06:17.342403  647891 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:17.342546  647891 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:06:17.344647  647891 out.go:177] * Starting "default-k8s-diff-port-991097" primary control-plane node in "default-k8s-diff-port-991097" cluster
	I0210 14:06:17.345834  647891 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 14:06:17.345863  647891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 14:06:17.345881  647891 cache.go:56] Caching tarball of preloaded images
	I0210 14:06:17.345970  647891 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 14:06:17.345985  647891 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 14:06:17.346082  647891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/config.json ...
	I0210 14:06:17.346270  647891 start.go:360] acquireMachinesLock for default-k8s-diff-port-991097: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 14:06:17.346312  647891 start.go:364] duration metric: took 22.484µs to acquireMachinesLock for "default-k8s-diff-port-991097"
	I0210 14:06:17.346326  647891 start.go:96] Skipping create...Using existing machine configuration
	I0210 14:06:17.346396  647891 fix.go:54] fixHost starting: 
	I0210 14:06:17.346671  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.346702  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.362026  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0210 14:06:17.362460  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.362937  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.362960  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.363308  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.363509  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.363660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:06:17.365186  647891 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991097: state=Stopped err=<nil>
	I0210 14:06:17.365227  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	W0210 14:06:17.365370  647891 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 14:06:17.367081  647891 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991097" ...
	I0210 14:06:17.368184  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Start
	I0210 14:06:17.368392  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) starting domain...
	I0210 14:06:17.368412  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) ensuring networks are active...
	I0210 14:06:17.369033  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Ensuring network default is active
	I0210 14:06:17.369340  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Ensuring network mk-default-k8s-diff-port-991097 is active
	I0210 14:06:17.369654  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) getting domain XML...
	I0210 14:06:17.370420  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) creating domain...
	I0210 14:06:18.584048  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) waiting for IP...
	I0210 14:06:18.584938  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.585440  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.585547  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:18.585443  647926 retry.go:31] will retry after 284.933629ms: waiting for domain to come up
	I0210 14:06:18.872073  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.872628  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.872654  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:18.872603  647926 retry.go:31] will retry after 252.055679ms: waiting for domain to come up
	I0210 14:06:19.125837  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.126311  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.126344  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.126282  647926 retry.go:31] will retry after 411.979825ms: waiting for domain to come up
	I0210 14:06:19.540074  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.540626  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.540658  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.540586  647926 retry.go:31] will retry after 404.768184ms: waiting for domain to come up
	I0210 14:06:19.947166  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.947685  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.947741  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.947665  647926 retry.go:31] will retry after 556.378156ms: waiting for domain to come up
	I0210 14:06:20.505361  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:20.505826  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:20.505867  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:20.505784  647926 retry.go:31] will retry after 866.999674ms: waiting for domain to come up
	I0210 14:06:21.374890  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:21.375452  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:21.375483  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:21.375399  647926 retry.go:31] will retry after 773.54598ms: waiting for domain to come up
	I0210 14:06:22.150227  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:22.150626  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:22.150649  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:22.150606  647926 retry.go:31] will retry after 1.159257258s: waiting for domain to come up
	I0210 14:06:23.311620  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:23.312197  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:23.312231  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:23.312136  647926 retry.go:31] will retry after 1.322774288s: waiting for domain to come up
	I0210 14:06:24.636617  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:24.637078  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:24.637106  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:24.637035  647926 retry.go:31] will retry after 1.698355707s: waiting for domain to come up
	I0210 14:06:26.337653  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:26.338239  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:26.338269  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:26.338193  647926 retry.go:31] will retry after 2.301675582s: waiting for domain to come up
	I0210 14:06:30.917338  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:06:30.917550  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
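For reference, the kubelet-check quoted above is simply an HTTP GET against the kubelet's local healthz endpoint. The sketch below is a hypothetical, standalone Go probe of that same endpoint, not minikube or kubeadm code; the address localhost:10248/healthz is taken from the log line itself.

// Illustrative only: a standalone probe equivalent to the curl call quoted in the
// kubelet-check message above. Not part of minikube or kubeadm.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// A "connection refused" here matches the failure reported by kubeadm:
		// the kubelet is not listening yet (or not running at all).
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}

Run on the node itself, a "connection refused" reproduces exactly the condition kubeadm reports above, while a 200 OK means the kubelet has come up.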
	I0210 14:06:28.642137  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:28.642701  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:28.642735  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:28.642637  647926 retry.go:31] will retry after 3.42557087s: waiting for domain to come up
	I0210 14:06:32.072208  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:32.072678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:32.072705  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:32.072653  647926 retry.go:31] will retry after 4.016224279s: waiting for domain to come up
	I0210 14:06:36.093333  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.093867  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has current primary IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.093891  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) found domain IP: 192.168.39.38
	I0210 14:06:36.093900  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) reserving static IP address...
	I0210 14:06:36.094346  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991097", mac: "52:54:00:41:07:a8", ip: "192.168.39.38"} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.094400  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | skip adding static IP to network mk-default-k8s-diff-port-991097 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991097", mac: "52:54:00:41:07:a8", ip: "192.168.39.38"}
	I0210 14:06:36.094419  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) reserved static IP address 192.168.39.38 for domain default-k8s-diff-port-991097
	I0210 14:06:36.094435  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) waiting for SSH...
	I0210 14:06:36.094449  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Getting to WaitForSSH function...
	I0210 14:06:36.096338  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.096691  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.096731  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.096845  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Using SSH client type: external
	I0210 14:06:36.096888  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa (-rw-------)
	I0210 14:06:36.096933  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 14:06:36.096951  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | About to run SSH command:
	I0210 14:06:36.096961  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | exit 0
	I0210 14:06:36.224595  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | SSH cmd err, output: <nil>: 
	I0210 14:06:36.224941  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetConfigRaw
	I0210 14:06:36.225577  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:36.228100  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.228466  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.228488  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.228753  647891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/config.json ...
	I0210 14:06:36.228952  647891 machine.go:93] provisionDockerMachine start ...
	I0210 14:06:36.228976  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:36.229205  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.231380  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.231680  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.231715  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.231796  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.232000  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.232158  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.232320  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.232502  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.232716  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.232730  647891 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 14:06:36.348884  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 14:06:36.348927  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.349191  647891 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991097"
	I0210 14:06:36.349222  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.349449  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.352262  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.352630  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.352660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.352854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.353039  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.353197  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.353338  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.353529  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.353760  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.353774  647891 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991097 && echo "default-k8s-diff-port-991097" | sudo tee /etc/hostname
	I0210 14:06:36.482721  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991097
	
	I0210 14:06:36.482754  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.485405  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.485793  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.485839  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.485972  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.486202  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.486369  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.486526  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.486705  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.486883  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.486900  647891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991097/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 14:06:36.609135  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 14:06:36.609166  647891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 14:06:36.609210  647891 buildroot.go:174] setting up certificates
	I0210 14:06:36.609221  647891 provision.go:84] configureAuth start
	I0210 14:06:36.609232  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.609479  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:36.612210  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.612560  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.612587  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.612688  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.614722  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.615063  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.615108  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.615271  647891 provision.go:143] copyHostCerts
	I0210 14:06:36.615343  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 14:06:36.615358  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 14:06:36.615420  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 14:06:36.615522  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 14:06:36.615530  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 14:06:36.615553  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 14:06:36.615617  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 14:06:36.615624  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 14:06:36.615645  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 14:06:36.615712  647891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991097 san=[127.0.0.1 192.168.39.38 default-k8s-diff-port-991097 localhost minikube]
	I0210 14:06:36.700551  647891 provision.go:177] copyRemoteCerts
	I0210 14:06:36.700630  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 14:06:36.700660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.703231  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.703510  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.703552  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.703684  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.703854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.704015  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.704123  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:36.791354  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 14:06:36.815844  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 14:06:36.839837  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0210 14:06:36.864605  647891 provision.go:87] duration metric: took 255.365505ms to configureAuth
	I0210 14:06:36.864653  647891 buildroot.go:189] setting minikube options for container-runtime
	I0210 14:06:36.864900  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:06:36.864986  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.867500  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.867819  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.867843  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.868078  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.868301  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.868445  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.868556  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.868671  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.868837  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.868851  647891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 14:06:37.117664  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 14:06:37.117702  647891 machine.go:96] duration metric: took 888.734538ms to provisionDockerMachine
	I0210 14:06:37.117738  647891 start.go:293] postStartSetup for "default-k8s-diff-port-991097" (driver="kvm2")
	I0210 14:06:37.117752  647891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 14:06:37.117780  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.118146  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 14:06:37.118185  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.121015  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.121387  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.121420  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.121678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.121877  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.122038  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.122167  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.212791  647891 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 14:06:37.217377  647891 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 14:06:37.217399  647891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 14:06:37.217455  647891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 14:06:37.217531  647891 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 14:06:37.217617  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 14:06:37.229155  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:06:37.256944  647891 start.go:296] duration metric: took 139.188892ms for postStartSetup
	I0210 14:06:37.256995  647891 fix.go:56] duration metric: took 19.910598766s for fixHost
	I0210 14:06:37.257019  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.259761  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.260061  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.260095  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.260309  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.260516  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.260716  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.260828  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.261003  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:37.261211  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:37.261223  647891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 14:06:37.373077  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739196397.346971659
	
	I0210 14:06:37.373102  647891 fix.go:216] guest clock: 1739196397.346971659
	I0210 14:06:37.373109  647891 fix.go:229] Guest: 2025-02-10 14:06:37.346971659 +0000 UTC Remote: 2025-02-10 14:06:37.256999277 +0000 UTC m=+20.051538196 (delta=89.972382ms)
	I0210 14:06:37.373144  647891 fix.go:200] guest clock delta is within tolerance: 89.972382ms
	I0210 14:06:37.373150  647891 start.go:83] releasing machines lock for "default-k8s-diff-port-991097", held for 20.026829951s
	I0210 14:06:37.373175  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.373444  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:37.376107  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.376494  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.376541  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.376658  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377209  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377404  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377534  647891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 14:06:37.377589  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.377646  647891 ssh_runner.go:195] Run: cat /version.json
	I0210 14:06:37.377676  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.380159  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380444  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380557  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.380597  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380714  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.380818  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.380854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380890  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.380991  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.381076  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.381150  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.381210  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.381236  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.381376  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.461615  647891 ssh_runner.go:195] Run: systemctl --version
	I0210 14:06:37.484185  647891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 14:06:37.626066  647891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 14:06:37.632178  647891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 14:06:37.632269  647891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 14:06:37.649096  647891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 14:06:37.649125  647891 start.go:495] detecting cgroup driver to use...
	I0210 14:06:37.649207  647891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 14:06:37.666251  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 14:06:37.680465  647891 docker.go:217] disabling cri-docker service (if available) ...
	I0210 14:06:37.680513  647891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 14:06:37.694090  647891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 14:06:37.707550  647891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 14:06:37.831118  647891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 14:06:37.980607  647891 docker.go:233] disabling docker service ...
	I0210 14:06:37.980676  647891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 14:06:37.995113  647891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 14:06:38.009358  647891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 14:06:38.140399  647891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 14:06:38.254033  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 14:06:38.267735  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 14:06:38.286239  647891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 14:06:38.286326  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.296619  647891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 14:06:38.296675  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.306712  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.316772  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.326918  647891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 14:06:38.337280  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.347440  647891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.364350  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.374474  647891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 14:06:38.383773  647891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
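For reference, the sysctl failure above only means that /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet because the br_netfilter module has not been loaded; the log loads it with modprobe immediately below. A minimal, purely illustrative Go check for the same condition (not minikube code; the path is the one from the log) might look like:

// Illustrative only: report whether the bridge-netfilter sysctl is present,
// which is exactly what the failed "sudo sysctl" probe above tests.
package main

import (
	"fmt"
	"os"
)

func main() {
	const p = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(p); err != nil {
		// A missing file usually means the br_netfilter module is not loaded yet.
		fmt.Println("bridge netfilter not available:", err)
		return
	}
	v, err := os.ReadFile(p)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	fmt.Printf("net.bridge.bridge-nf-call-iptables = %s", v)
}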
	I0210 14:06:38.383822  647891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 14:06:38.397731  647891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 14:06:38.407296  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:06:38.518444  647891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 14:06:38.609821  647891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 14:06:38.609897  647891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 14:06:38.614975  647891 start.go:563] Will wait 60s for crictl version
	I0210 14:06:38.615032  647891 ssh_runner.go:195] Run: which crictl
	I0210 14:06:38.618907  647891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 14:06:38.666752  647891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 14:06:38.666843  647891 ssh_runner.go:195] Run: crio --version
	I0210 14:06:38.695436  647891 ssh_runner.go:195] Run: crio --version
	I0210 14:06:38.724290  647891 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 14:06:38.725705  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:38.728442  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:38.728769  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:38.728804  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:38.728997  647891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 14:06:38.733358  647891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:06:38.746088  647891 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 14:06:38.746232  647891 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 14:06:38.746279  647891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:06:38.785698  647891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 14:06:38.785767  647891 ssh_runner.go:195] Run: which lz4
	I0210 14:06:38.790230  647891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 14:06:38.794584  647891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 14:06:38.794612  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 14:06:40.165093  647891 crio.go:462] duration metric: took 1.374905922s to copy over tarball
	I0210 14:06:40.165182  647891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 14:06:42.267000  647891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10178421s)
	I0210 14:06:42.267031  647891 crio.go:469] duration metric: took 2.101903432s to extract the tarball
	I0210 14:06:42.267039  647891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 14:06:42.304364  647891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:06:42.347839  647891 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 14:06:42.347867  647891 cache_images.go:84] Images are preloaded, skipping loading
	I0210 14:06:42.347877  647891 kubeadm.go:934] updating node { 192.168.39.38 8444 v1.32.1 crio true true} ...
	I0210 14:06:42.347999  647891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 14:06:42.348081  647891 ssh_runner.go:195] Run: crio config
	I0210 14:06:42.392127  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:06:42.392155  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:06:42.392168  647891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 14:06:42.392205  647891 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991097 NodeName:default-k8s-diff-port-991097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 14:06:42.392445  647891 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991097"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.38"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
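	Note: the kubeadm configuration above is what minikube generates and copies to /var/tmp/minikube/kubeadm.yaml before the init phases are run later in this log. As an illustrative aside (not part of the captured run), a generated file like this can be checked against the kubeadm API schema before it is applied; recent kubeadm releases ship a validate subcommand:
	  # Illustrative only -- validate the generated file against the v1beta4 types (paths taken from the log above).
	  sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml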
	
	I0210 14:06:42.392531  647891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 14:06:42.402790  647891 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 14:06:42.402866  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 14:06:42.412691  647891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0210 14:06:42.430227  647891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 14:06:42.447018  647891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0210 14:06:42.463855  647891 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0210 14:06:42.467830  647891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:06:42.479887  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:06:42.616347  647891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 14:06:42.633982  647891 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097 for IP: 192.168.39.38
	I0210 14:06:42.634012  647891 certs.go:194] generating shared ca certs ...
	I0210 14:06:42.634036  647891 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:06:42.634251  647891 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 14:06:42.634325  647891 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 14:06:42.634339  647891 certs.go:256] generating profile certs ...
	I0210 14:06:42.634464  647891 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.key
	I0210 14:06:42.634547  647891 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.key.653a5b77
	I0210 14:06:42.634633  647891 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.key
	I0210 14:06:42.634756  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 14:06:42.634790  647891 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 14:06:42.634804  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 14:06:42.634842  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 14:06:42.634877  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 14:06:42.634931  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 14:06:42.634990  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:06:42.635813  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 14:06:42.683471  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 14:06:42.717348  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 14:06:42.753582  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 14:06:42.786140  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0210 14:06:42.826849  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 14:06:42.854467  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 14:06:42.880065  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 14:06:42.907119  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 14:06:42.930542  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 14:06:42.953922  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 14:06:42.976830  647891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 14:06:42.993090  647891 ssh_runner.go:195] Run: openssl version
	I0210 14:06:42.999059  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 14:06:43.010187  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.014640  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.014690  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.020392  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 14:06:43.031108  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 14:06:43.041766  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.046208  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.046242  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.051895  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 14:06:43.062587  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 14:06:43.073217  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.077547  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.077594  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.083004  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
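	Note: the three ln -fs steps above install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming convention (e.g. b5213941.0 for minikubeCA.pem), so TLS clients on the node can locate the certificate by hash lookup. The same steps combined into one sketch, using only the commands this log already runs:
	  # Illustrative only -- link a CA into the system trust directory under its subject hash.
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"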
	I0210 14:06:43.093687  647891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 14:06:43.098273  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 14:06:43.103884  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 14:06:43.109468  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 14:06:43.114957  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 14:06:43.120594  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 14:06:43.126311  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
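	Note: the openssl -checkend 86400 calls above are minikube's certificate-expiry check: openssl exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and 1 if it would expire, which decides whether the control-plane certs get regenerated. Illustrative manual form:
	  # Illustrative only -- the exit status reports whether the cert survives the next 24h.
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"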
	I0210 14:06:43.132094  647891 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:43.132170  647891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 14:06:43.132205  647891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:06:43.170719  647891 cri.go:89] found id: ""
	I0210 14:06:43.170794  647891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 14:06:43.181310  647891 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 14:06:43.181333  647891 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 14:06:43.181378  647891 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 14:06:43.191081  647891 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 14:06:43.191662  647891 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-991097" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:06:43.191931  647891 kubeconfig.go:62] /home/jenkins/minikube-integration/20390-580861/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-991097" cluster setting kubeconfig missing "default-k8s-diff-port-991097" context setting]
	I0210 14:06:43.192424  647891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:06:43.193695  647891 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 14:06:43.203483  647891 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0210 14:06:43.203510  647891 kubeadm.go:1160] stopping kube-system containers ...
	I0210 14:06:43.203522  647891 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 14:06:43.203565  647891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:06:43.248106  647891 cri.go:89] found id: ""
	I0210 14:06:43.248168  647891 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 14:06:43.264683  647891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:06:43.274810  647891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:06:43.274837  647891 kubeadm.go:157] found existing configuration files:
	
	I0210 14:06:43.274893  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0210 14:06:43.284346  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:06:43.284394  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:06:43.294116  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0210 14:06:43.303692  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:06:43.303743  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:06:43.313293  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0210 14:06:43.322835  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:06:43.322893  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:06:43.332538  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0210 14:06:43.341968  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:06:43.342030  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:06:43.351997  647891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:06:43.361911  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:43.471810  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.088121  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.292411  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.357453  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.447107  647891 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:06:44.447198  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:44.947672  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:45.447925  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:45.947638  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:46.447630  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:46.504665  647891 api_server.go:72] duration metric: took 2.057554604s to wait for apiserver process to appear ...
	I0210 14:06:46.504702  647891 api_server.go:88] waiting for apiserver healthz status ...
	I0210 14:06:46.504729  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:46.505324  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:06:47.005003  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:52.009445  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:06:52.009499  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:57.013020  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:06:57.013089  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:02.016406  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:02.016462  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.005061  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": read tcp 192.168.39.1:37208->192.168.39.38:8444: read: connection reset by peer
	I0210 14:07:07.005127  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.005704  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:07:10.919140  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:07:10.919450  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:07:10.919470  644218 kubeadm.go:310] 
	I0210 14:07:10.919531  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:07:10.919612  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:07:10.919643  644218 kubeadm.go:310] 
	I0210 14:07:10.919696  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:07:10.919740  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:07:10.919898  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:07:10.919908  644218 kubeadm.go:310] 
	I0210 14:07:10.920052  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:07:10.920108  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:07:10.920160  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:07:10.920171  644218 kubeadm.go:310] 
	I0210 14:07:10.920344  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:07:10.920471  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:07:10.920487  644218 kubeadm.go:310] 
	I0210 14:07:10.920637  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:07:10.920748  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:07:10.920852  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:07:10.920956  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:07:10.920968  644218 kubeadm.go:310] 
	I0210 14:07:10.921451  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:07:10.921558  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:07:10.921647  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
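	Note: the kubelet-check timeout above comes from process 644218, which is driving a kubeadm init with the v1.20.0 binaries (see the PATH in the next line). It is the generic kubeadm failure message; with cri-o, one common cause worth ruling out is a cgroup driver mismatch between the kubelet and the runtime. A hedged way to compare both sides and pull recent kubelet errors, using only tools this log already invokes:
	  # Illustrative only -- both values should agree (cgroupfs or systemd).
	  sudo crio config 2>/dev/null | grep -i cgroup_manager       # cri-o side
	  grep -i cgroupDriver /var/lib/kubelet/config.yaml            # kubelet side
	  sudo journalctl -u kubelet --no-pager | tail -n 50           # recent kubelet errors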
	W0210 14:07:10.921820  644218 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0210 14:07:10.921873  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:07:11.388800  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:07:11.404434  644218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:07:11.415583  644218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:07:11.415609  644218 kubeadm.go:157] found existing configuration files:
	
	I0210 14:07:11.415668  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 14:07:11.425343  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:07:11.425411  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:07:11.435126  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 14:07:11.444951  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:07:11.445016  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:07:11.454675  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.463839  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:07:11.463923  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.473621  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 14:07:11.482802  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:07:11.482864  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:07:11.492269  644218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 14:07:11.706383  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:07:07.505081  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.505697  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:07:08.005039  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:13.005418  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:13.005503  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:18.006035  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:18.006088  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:23.006412  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:23.006480  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:24.990987  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:24.991022  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:24.991041  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.094135  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:25.094175  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:25.094195  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.134411  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:25.134448  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:25.505023  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.510502  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 14:07:25.510542  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 14:07:26.004985  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:26.016527  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 14:07:26.016561  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 14:07:26.505209  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:26.512830  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 200:
	ok
	I0210 14:07:26.519490  647891 api_server.go:141] control plane version: v1.32.1
	I0210 14:07:26.519519  647891 api_server.go:131] duration metric: took 40.01480806s to wait for apiserver health ...
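	Note: the healthz probes above trace the apiserver coming up: connection refused while the static pod restarts, then 403 because the unauthenticated probe runs as system:anonymous and the RBAC bootstrap roles do not exist yet, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, and finally 200. An authenticated way to read the same endpoints (context name taken from this profile; illustrative only, not part of the captured run):
	  # Illustrative only -- query the verbose health endpoints through kubectl.
	  kubectl --context default-k8s-diff-port-991097 get --raw='/healthz?verbose'
	  kubectl --context default-k8s-diff-port-991097 get --raw='/readyz?verbose'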
	I0210 14:07:26.519531  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:07:26.519541  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:07:26.521665  647891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 14:07:26.523188  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 14:07:26.534793  647891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
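	Note: the 496-byte file copied above is minikube's bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier (a "bridge" plugin with host-local IPAM); its exact contents are not reproduced in this log. A hedged way to inspect what was installed on the node:
	  # Illustrative only -- show the installed conflist and the plugin binaries it refers to.
	  sudo cat /etc/cni/net.d/1-k8s.conflist
	  ls /opt/cni/bin    # typically bridge, host-local, loopback and other CNI plugin binaries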
	I0210 14:07:26.555242  647891 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 14:07:26.560044  647891 system_pods.go:59] 8 kube-system pods found
	I0210 14:07:26.560088  647891 system_pods.go:61] "coredns-668d6bf9bc-chvvk" [81bc9af8-1dbc-4299-9818-c5e28cd527a4] Running
	I0210 14:07:26.560096  647891 system_pods.go:61] "etcd-default-k8s-diff-port-991097" [d7991f48-f3f9-4585-9d42-8ac10fb95d65] Running
	I0210 14:07:26.560105  647891 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991097" [91a8d2ac-4127-4e49-a21e-95babe7078b1] Running
	I0210 14:07:26.560113  647891 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991097" [12fbb1be-d90f-47b2-a6e6-5d541e1c9cd3] Running
	I0210 14:07:26.560128  647891 system_pods.go:61] "kube-proxy-k94kp" [82230795-ec36-4619-a8bd-6b1520b2dcce] Running
	I0210 14:07:26.560133  647891 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991097" [98c775ff-82f9-42b1-a3ba-4a2d1830f6fc] Running
	I0210 14:07:26.560139  647891 system_pods.go:61] "metrics-server-f79f97bbb-j7gwv" [20814b8f-e1ca-4d3e-baa2-83fa85d5055e] Pending
	I0210 14:07:26.560144  647891 system_pods.go:61] "storage-provisioner" [f31ad609-ca85-4fbb-9fa7-b0fd93d6b504] Running
	I0210 14:07:26.560152  647891 system_pods.go:74] duration metric: took 4.884117ms to wait for pod list to return data ...
	I0210 14:07:26.560166  647891 node_conditions.go:102] verifying NodePressure condition ...
	I0210 14:07:26.563732  647891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 14:07:26.563765  647891 node_conditions.go:123] node cpu capacity is 2
	I0210 14:07:26.563783  647891 node_conditions.go:105] duration metric: took 3.607402ms to run NodePressure ...
	I0210 14:07:26.563811  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:07:26.839281  647891 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 14:07:26.842184  647891 retry.go:31] will retry after 267.442504ms: kubelet not initialised
	I0210 14:07:27.114654  647891 retry.go:31] will retry after 460.309798ms: kubelet not initialised
	I0210 14:07:27.580487  647891 retry.go:31] will retry after 468.648016ms: kubelet not initialised
	I0210 14:07:28.052957  647891 retry.go:31] will retry after 634.581788ms: kubelet not initialised
	I0210 14:07:28.692193  647891 retry.go:31] will retry after 1.585469768s: kubelet not initialised
	I0210 14:07:30.280814  647891 retry.go:31] will retry after 1.746270708s: kubelet not initialised
	I0210 14:07:32.035943  647891 kubeadm.go:739] kubelet initialised
	I0210 14:07:32.035970  647891 kubeadm.go:740] duration metric: took 5.19665458s waiting for restarted kubelet to initialise ...
	I0210 14:07:32.035982  647891 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:07:32.039938  647891 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.045939  647891 pod_ready.go:93] pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.045973  647891 pod_ready.go:82] duration metric: took 2.006006864s for pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.045988  647891 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.049855  647891 pod_ready.go:93] pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.049879  647891 pod_ready.go:82] duration metric: took 3.881494ms for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.049892  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.053608  647891 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.053629  647891 pod_ready.go:82] duration metric: took 3.729266ms for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.053642  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:36.060444  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:38.560369  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:41.059206  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:43.059645  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:44.559464  647891 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.559497  647891 pod_ready.go:82] duration metric: took 10.505846034s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.559509  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-k94kp" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.563350  647891 pod_ready.go:93] pod "kube-proxy-k94kp" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.563377  647891 pod_ready.go:82] duration metric: took 3.859986ms for pod "kube-proxy-k94kp" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.563391  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.567231  647891 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.567251  647891 pod_ready.go:82] duration metric: took 3.851395ms for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.567263  647891 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:46.573010  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:49.073487  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:51.075217  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:53.573637  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:56.072364  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:58.073033  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:00.074357  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:02.574325  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:05.074157  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:07.074228  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:09.572654  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:11.573678  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:14.071655  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:16.072359  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:18.074418  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:20.572441  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:22.573381  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:25.073116  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:27.571988  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:29.573021  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:32.072192  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:34.073218  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:36.073606  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:38.573206  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:41.073455  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:43.572727  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:45.573114  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:48.072635  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:50.072982  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:52.572772  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:55.072938  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:57.073602  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:59.572429  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:01.572682  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:03.572760  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:06.073768  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:07.694951  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:09:07.695080  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 14:09:07.696680  644218 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 14:09:07.696776  644218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:09:07.696928  644218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:09:07.697091  644218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:09:07.697242  644218 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 14:09:07.697319  644218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:09:07.698867  644218 out.go:235]   - Generating certificates and keys ...
	I0210 14:09:07.698960  644218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:09:07.699052  644218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:09:07.699176  644218 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:09:07.699261  644218 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:09:07.699354  644218 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:09:07.699403  644218 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:09:07.699465  644218 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:09:07.699527  644218 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:09:07.699633  644218 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:09:07.699731  644218 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:09:07.699800  644218 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:09:07.699884  644218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:09:07.699960  644218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:09:07.700047  644218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:09:07.700138  644218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:09:07.700209  644218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:09:07.700322  644218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:09:07.700393  644218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:09:07.700436  644218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:09:07.700526  644218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:09:07.701917  644218 out.go:235]   - Booting up control plane ...
	I0210 14:09:07.702014  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:09:07.702107  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:09:07.702184  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:09:07.702300  644218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:09:07.702455  644218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 14:09:07.702532  644218 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 14:09:07.702626  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.702845  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.702940  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703134  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703216  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703373  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703435  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703588  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703650  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703819  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703826  644218 kubeadm.go:310] 
	I0210 14:09:07.703859  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:09:07.703893  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:09:07.703900  644218 kubeadm.go:310] 
	I0210 14:09:07.703933  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:09:07.703994  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:09:07.704123  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:09:07.704131  644218 kubeadm.go:310] 
	I0210 14:09:07.704298  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:09:07.704355  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:09:07.704403  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:09:07.704413  644218 kubeadm.go:310] 
	I0210 14:09:07.704552  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:09:07.704673  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:09:07.704685  644218 kubeadm.go:310] 
	I0210 14:09:07.704841  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:09:07.704960  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:09:07.705074  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:09:07.705199  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:09:07.705210  644218 kubeadm.go:310] 
	I0210 14:09:07.705291  644218 kubeadm.go:394] duration metric: took 7m58.218613622s to StartCluster
	I0210 14:09:07.705343  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:09:07.705405  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:09:07.750026  644218 cri.go:89] found id: ""
	I0210 14:09:07.750054  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.750063  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:09:07.750070  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:09:07.750136  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:09:07.793341  644218 cri.go:89] found id: ""
	I0210 14:09:07.793374  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.793386  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:09:07.793395  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:09:07.793455  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:09:07.835496  644218 cri.go:89] found id: ""
	I0210 14:09:07.835521  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.835538  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:09:07.835543  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:09:07.835620  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:09:07.869619  644218 cri.go:89] found id: ""
	I0210 14:09:07.869655  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.869663  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:09:07.869669  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:09:07.869735  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:09:07.927211  644218 cri.go:89] found id: ""
	I0210 14:09:07.927243  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.927253  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:09:07.927261  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:09:07.927331  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:09:07.966320  644218 cri.go:89] found id: ""
	I0210 14:09:07.966355  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.966365  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:09:07.966374  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:09:07.966437  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:09:07.999268  644218 cri.go:89] found id: ""
	I0210 14:09:07.999302  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.999313  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:09:07.999321  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:09:07.999389  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:09:08.039339  644218 cri.go:89] found id: ""
	I0210 14:09:08.039371  644218 logs.go:282] 0 containers: []
	W0210 14:09:08.039380  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:09:08.039391  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:09:08.039404  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:09:08.091644  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:09:08.091675  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:09:08.105318  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:09:08.105346  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:09:08.182104  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:09:08.182127  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:09:08.182140  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:09:08.287929  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:09:08.287974  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0210 14:09:08.331764  644218 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 14:09:08.331884  644218 out.go:270] * 
	W0210 14:09:08.332053  644218 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.332079  644218 out.go:270] * 
	W0210 14:09:08.333029  644218 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 14:09:08.336162  644218 out.go:201] 
	W0210 14:09:08.337200  644218 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.337269  644218 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 14:09:08.337316  644218 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 14:09:08.339083  644218 out.go:201] 
	
	
	==> CRI-O <==
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.305762980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196549305741918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bfc7259-1a0c-4b32-8df4-5e1ddd1e496f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.306385235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f22ab8f-0643-4d95-9b82-7c8885e18dea name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.306454082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f22ab8f-0643-4d95-9b82-7c8885e18dea name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.306486268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4f22ab8f-0643-4d95-9b82-7c8885e18dea name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.337927980Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12247ff1-9717-4280-aec4-8098be2adaae name=/runtime.v1.RuntimeService/Version
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.338000557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12247ff1-9717-4280-aec4-8098be2adaae name=/runtime.v1.RuntimeService/Version
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.338948952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=596e0c5a-8870-459e-adee-5d0eb2db9d0d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.339298720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196549339278868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=596e0c5a-8870-459e-adee-5d0eb2db9d0d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.339785411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a08cc165-baef-4950-932a-4b3df1071505 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.339839298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a08cc165-baef-4950-932a-4b3df1071505 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.339870451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a08cc165-baef-4950-932a-4b3df1071505 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.375094101Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=147edd19-1800-4dad-b39f-d34cc7a770c9 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.375170672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=147edd19-1800-4dad-b39f-d34cc7a770c9 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.376595228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7df346d1-f34f-4a0d-b033-a66d4be43e05 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.376936828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196549376916430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7df346d1-f34f-4a0d-b033-a66d4be43e05 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.377595331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=838b33ea-dbd7-4e2a-bf77-fe5be05a3886 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.377665744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=838b33ea-dbd7-4e2a-bf77-fe5be05a3886 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.377699907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=838b33ea-dbd7-4e2a-bf77-fe5be05a3886 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.411974805Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d0fdc1a-d108-44ee-8ef1-ce92488abff3 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.412050527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d0fdc1a-d108-44ee-8ef1-ce92488abff3 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.413583837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e5b9d57-a6b4-4329-9d4b-14ebcca3d7ff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.413934606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739196549413914819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e5b9d57-a6b4-4329-9d4b-14ebcca3d7ff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.414461185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e76a1cb7-6892-457a-b3ce-af48ac150ae2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.414574476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e76a1cb7-6892-457a-b3ce-af48ac150ae2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:09:09 old-k8s-version-643105 crio[627]: time="2025-02-10 14:09:09.414610078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e76a1cb7-6892-457a-b3ce-af48ac150ae2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 14:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053008] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041885] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.089243] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.827394] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.420700] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb10 14:01] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.115834] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.165556] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.132361] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.253937] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.572683] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.063864] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.037525] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +14.230154] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 14:05] systemd-fstab-generator[4994]: Ignoring "noauto" option for root device
	[Feb10 14:07] systemd-fstab-generator[5274]: Ignoring "noauto" option for root device
	[  +0.063648] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:09:09 up 8 min,  0 users,  load average: 0.07, 0.13, 0.08
	Linux old-k8s-version-643105 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000864540)
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: goroutine 145 [syscall]:
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: syscall.Syscall6(0xe8, 0xe, 0xc000a0fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xe, 0xc000a0fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0007121e0, 0x0, 0x0, 0x0)
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0009ff040)
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Feb 10 14:09:07 old-k8s-version-643105 kubelet[5451]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Feb 10 14:09:07 old-k8s-version-643105 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 14:09:07 old-k8s-version-643105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 14:09:08 old-k8s-version-643105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 10 14:09:08 old-k8s-version-643105 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 14:09:08 old-k8s-version-643105 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 14:09:08 old-k8s-version-643105 kubelet[5517]: I0210 14:09:08.667042    5517 server.go:416] Version: v1.20.0
	Feb 10 14:09:08 old-k8s-version-643105 kubelet[5517]: I0210 14:09:08.669737    5517 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 14:09:08 old-k8s-version-643105 kubelet[5517]: I0210 14:09:08.677949    5517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 14:09:08 old-k8s-version-643105 kubelet[5517]: W0210 14:09:08.679435    5517 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 10 14:09:08 old-k8s-version-643105 kubelet[5517]: I0210 14:09:08.679643    5517 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
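The kubeadm output captured above repeatedly reports the kubelet health endpoint on 127.0.0.1:10248 refusing connections, and the post-mortem found no control-plane containers at all. A minimal sketch of following kubeadm's own troubleshooting advice on this profile (assuming the old-k8s-version-643105 VM is still up and reachable over ssh) would be:

    # Is the kubelet service running inside the node? (command suggested by kubeadm above)
    out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo systemctl status kubelet"
    # Inspect the kubelet journal for the restart loop visible in the ==> kubelet <== section
    out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
    # List any control-plane containers CRI-O started (the post-mortem found none)
    out/minikube-linux-amd64 -p old-k8s-version-643105 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
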
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (230.449105ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-643105" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.10s)
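The last suggestion minikube emits before giving up is to retry with the kubelet's cgroup driver pinned to systemd (the kubelet log above also warns "Cannot detect current cgroup on cgroup v2"). A hedged reproduction of that suggestion, assuming the same profile and only the flags visible in this excerpt (the full original start invocation is not shown here), would be:

    out/minikube-linux-amd64 start -p old-k8s-version-643105 --kubernetes-version=v1.20.0 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
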

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:09:11.736471  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:09:35.564973  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:09:47.442459  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:10:33.642836  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:10:43.815935  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:11:18.942749  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:11:27.875137  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:11:55.578755  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:12:06.880946  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:12:13.584775  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:12:42.010517  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:12:44.328361  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:13:16.656737  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:14:00.165485  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:14:07.393793  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:14:34.780519  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:34.786889  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:34.798278  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:34.819724  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:34.861142  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:14:34.943312  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:35.104929  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:35.426714  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:35.565352  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:14:36.068028  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:14:37.349586  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:14:39.723701  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:39.911312  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:45.032961  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:47.442567  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:14:55.274503  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:15:15.755822  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:15:16.664109  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:15:23.229936  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:15:33.642981  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:15:43.816054  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:15:56.717720  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:15:58.630979  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:16:10.506544  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:16:18.943110  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:16:27.874411  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:17:13.584978  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[the warning above was logged 4 more times while the apiserver remained unreachable]
E0210 14:17:18.639904  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[the warning above was logged 25 more times while the apiserver remained unreachable]
E0210 14:17:44.328321  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[the warning above was logged 24 more times while the apiserver remained unreachable]
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (230.242223ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-643105" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
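For context, the repeated warnings above come from polling the pods API with the label selector k8s-app=kubernetes-dashboard; the poll keeps retrying while the apiserver is unreachable ("connection refused") and gives up once its 9m0s deadline expires ("context deadline exceeded"). Below is a minimal, hypothetical Go sketch of that pattern using client-go; it is not the actual helper code, and the kubeconfig path, poll interval, and readiness check are assumptions made for illustration.

// Hypothetical sketch (not from the minikube test suite) of polling for a
// dashboard pod until it is Running or a 9m0s deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (placeholder path, assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall deadline comparable to the test's 9m0s wait.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// While the apiserver is down, this is where the
			// "connection refused" warnings above would come from.
			fmt.Println("WARNING: pod list returned:", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// Mirrors "failed to start within 9m0s: context deadline exceeded".
			fmt.Println("failed to start within 9m0s:", ctx.Err())
			return
		case <-time.After(5 * time.Second): // assumed poll interval
		}
	}
}
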
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (219.108477ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-643105 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-187291 --memory=2200 --alsologtostderr   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-264648 image list                           | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	| delete  | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-372614 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | disable-driver-mounts-372614                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991097  | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:06 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-187291             | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-187291                  | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-187291 --memory=2200 --alsologtostderr   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-187291 image list                           | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	| delete  | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	| addons  | enable dashboard -p default-k8s-diff-port-991097       | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:06 UTC | 10 Feb 25 14:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:06 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-991097                           | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 14:06:17
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 14:06:17.243747  647891 out.go:345] Setting OutFile to fd 1 ...
	I0210 14:06:17.244049  647891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:06:17.244060  647891 out.go:358] Setting ErrFile to fd 2...
	I0210 14:06:17.244065  647891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:06:17.244273  647891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 14:06:17.244886  647891 out.go:352] Setting JSON to false
	I0210 14:06:17.245898  647891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13722,"bootTime":1739182655,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 14:06:17.246027  647891 start.go:139] virtualization: kvm guest
	I0210 14:06:17.248712  647891 out.go:177] * [default-k8s-diff-port-991097] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 14:06:17.249739  647891 notify.go:220] Checking for updates...
	I0210 14:06:17.249783  647891 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 14:06:17.250816  647891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 14:06:17.251974  647891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:06:17.252995  647891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 14:06:17.254055  647891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 14:06:17.255160  647891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 14:06:17.256646  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:06:17.257053  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.257103  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.272251  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0210 14:06:17.272688  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.273235  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.273265  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.273611  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.273803  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.274066  647891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 14:06:17.274374  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.274410  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.289090  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0210 14:06:17.289485  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.289921  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.289940  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.290252  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.290429  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.324494  647891 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 14:06:17.325653  647891 start.go:297] selected driver: kvm2
	I0210 14:06:17.325667  647891 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8
s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:17.325821  647891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 14:06:17.326767  647891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:06:17.326863  647891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 14:06:17.341811  647891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 14:06:17.342243  647891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 14:06:17.342292  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:06:17.342352  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:06:17.342403  647891 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:17.342546  647891 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:06:17.344647  647891 out.go:177] * Starting "default-k8s-diff-port-991097" primary control-plane node in "default-k8s-diff-port-991097" cluster
	I0210 14:06:17.345834  647891 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 14:06:17.345863  647891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 14:06:17.345881  647891 cache.go:56] Caching tarball of preloaded images
	I0210 14:06:17.345970  647891 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 14:06:17.345985  647891 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 14:06:17.346082  647891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/config.json ...
	I0210 14:06:17.346270  647891 start.go:360] acquireMachinesLock for default-k8s-diff-port-991097: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 14:06:17.346312  647891 start.go:364] duration metric: took 22.484µs to acquireMachinesLock for "default-k8s-diff-port-991097"
	I0210 14:06:17.346326  647891 start.go:96] Skipping create...Using existing machine configuration
	I0210 14:06:17.346396  647891 fix.go:54] fixHost starting: 
	I0210 14:06:17.346671  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.346702  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.362026  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0210 14:06:17.362460  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.362937  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.362960  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.363308  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.363509  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.363660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:06:17.365186  647891 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991097: state=Stopped err=<nil>
	I0210 14:06:17.365227  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	W0210 14:06:17.365370  647891 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 14:06:17.367081  647891 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991097" ...
	I0210 14:06:17.368184  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Start
	I0210 14:06:17.368392  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) starting domain...
	I0210 14:06:17.368412  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) ensuring networks are active...
	I0210 14:06:17.369033  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Ensuring network default is active
	I0210 14:06:17.369340  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Ensuring network mk-default-k8s-diff-port-991097 is active
	I0210 14:06:17.369654  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) getting domain XML...
	I0210 14:06:17.370420  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) creating domain...
	I0210 14:06:18.584048  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) waiting for IP...
	I0210 14:06:18.584938  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.585440  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.585547  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:18.585443  647926 retry.go:31] will retry after 284.933629ms: waiting for domain to come up
	I0210 14:06:18.872073  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.872628  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.872654  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:18.872603  647926 retry.go:31] will retry after 252.055679ms: waiting for domain to come up
	I0210 14:06:19.125837  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.126311  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.126344  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.126282  647926 retry.go:31] will retry after 411.979825ms: waiting for domain to come up
	I0210 14:06:19.540074  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.540626  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.540658  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.540586  647926 retry.go:31] will retry after 404.768184ms: waiting for domain to come up
	I0210 14:06:19.947166  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.947685  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.947741  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.947665  647926 retry.go:31] will retry after 556.378156ms: waiting for domain to come up
	I0210 14:06:20.505361  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:20.505826  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:20.505867  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:20.505784  647926 retry.go:31] will retry after 866.999674ms: waiting for domain to come up
	I0210 14:06:21.374890  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:21.375452  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:21.375483  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:21.375399  647926 retry.go:31] will retry after 773.54598ms: waiting for domain to come up
	I0210 14:06:22.150227  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:22.150626  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:22.150649  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:22.150606  647926 retry.go:31] will retry after 1.159257258s: waiting for domain to come up
	I0210 14:06:23.311620  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:23.312197  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:23.312231  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:23.312136  647926 retry.go:31] will retry after 1.322774288s: waiting for domain to come up
	I0210 14:06:24.636617  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:24.637078  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:24.637106  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:24.637035  647926 retry.go:31] will retry after 1.698355707s: waiting for domain to come up
	I0210 14:06:26.337653  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:26.338239  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:26.338269  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:26.338193  647926 retry.go:31] will retry after 2.301675582s: waiting for domain to come up
	I0210 14:06:30.917338  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:06:30.917550  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:06:28.642137  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:28.642701  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:28.642735  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:28.642637  647926 retry.go:31] will retry after 3.42557087s: waiting for domain to come up
	I0210 14:06:32.072208  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:32.072678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:32.072705  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:32.072653  647926 retry.go:31] will retry after 4.016224279s: waiting for domain to come up
	I0210 14:06:36.093333  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.093867  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has current primary IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.093891  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) found domain IP: 192.168.39.38
	I0210 14:06:36.093900  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) reserving static IP address...
	I0210 14:06:36.094346  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991097", mac: "52:54:00:41:07:a8", ip: "192.168.39.38"} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.094400  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | skip adding static IP to network mk-default-k8s-diff-port-991097 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991097", mac: "52:54:00:41:07:a8", ip: "192.168.39.38"}
	I0210 14:06:36.094419  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) reserved static IP address 192.168.39.38 for domain default-k8s-diff-port-991097
	I0210 14:06:36.094435  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) waiting for SSH...
	I0210 14:06:36.094449  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Getting to WaitForSSH function...
	I0210 14:06:36.096338  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.096691  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.096731  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.096845  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Using SSH client type: external
	I0210 14:06:36.096888  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa (-rw-------)
	I0210 14:06:36.096933  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 14:06:36.096951  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | About to run SSH command:
	I0210 14:06:36.096961  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | exit 0
	I0210 14:06:36.224595  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | SSH cmd err, output: <nil>: 
	I0210 14:06:36.224941  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetConfigRaw
	I0210 14:06:36.225577  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:36.228100  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.228466  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.228488  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.228753  647891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/config.json ...
	I0210 14:06:36.228952  647891 machine.go:93] provisionDockerMachine start ...
	I0210 14:06:36.228976  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:36.229205  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.231380  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.231680  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.231715  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.231796  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.232000  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.232158  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.232320  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.232502  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.232716  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.232730  647891 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 14:06:36.348884  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 14:06:36.348927  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.349191  647891 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991097"
	I0210 14:06:36.349222  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.349449  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.352262  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.352630  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.352660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.352854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.353039  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.353197  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.353338  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.353529  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.353760  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.353774  647891 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991097 && echo "default-k8s-diff-port-991097" | sudo tee /etc/hostname
	I0210 14:06:36.482721  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991097
	
	I0210 14:06:36.482754  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.485405  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.485793  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.485839  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.485972  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.486202  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.486369  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.486526  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.486705  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.486883  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.486900  647891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991097/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 14:06:36.609135  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 14:06:36.609166  647891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 14:06:36.609210  647891 buildroot.go:174] setting up certificates
	I0210 14:06:36.609221  647891 provision.go:84] configureAuth start
	I0210 14:06:36.609232  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.609479  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:36.612210  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.612560  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.612587  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.612688  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.614722  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.615063  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.615108  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.615271  647891 provision.go:143] copyHostCerts
	I0210 14:06:36.615343  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 14:06:36.615358  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 14:06:36.615420  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 14:06:36.615522  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 14:06:36.615530  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 14:06:36.615553  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 14:06:36.615617  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 14:06:36.615624  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 14:06:36.615645  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 14:06:36.615712  647891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991097 san=[127.0.0.1 192.168.39.38 default-k8s-diff-port-991097 localhost minikube]
	I0210 14:06:36.700551  647891 provision.go:177] copyRemoteCerts
	I0210 14:06:36.700630  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 14:06:36.700660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.703231  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.703510  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.703552  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.703684  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.703854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.704015  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.704123  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:36.791354  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 14:06:36.815844  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 14:06:36.839837  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0210 14:06:36.864605  647891 provision.go:87] duration metric: took 255.365505ms to configureAuth
	I0210 14:06:36.864653  647891 buildroot.go:189] setting minikube options for container-runtime
	I0210 14:06:36.864900  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:06:36.864986  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.867500  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.867819  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.867843  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.868078  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.868301  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.868445  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.868556  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.868671  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.868837  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.868851  647891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 14:06:37.117664  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 14:06:37.117702  647891 machine.go:96] duration metric: took 888.734538ms to provisionDockerMachine
	I0210 14:06:37.117738  647891 start.go:293] postStartSetup for "default-k8s-diff-port-991097" (driver="kvm2")
	I0210 14:06:37.117752  647891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 14:06:37.117780  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.118146  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 14:06:37.118185  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.121015  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.121387  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.121420  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.121678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.121877  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.122038  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.122167  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.212791  647891 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 14:06:37.217377  647891 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 14:06:37.217399  647891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 14:06:37.217455  647891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 14:06:37.217531  647891 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 14:06:37.217617  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 14:06:37.229155  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:06:37.256944  647891 start.go:296] duration metric: took 139.188892ms for postStartSetup
	I0210 14:06:37.256995  647891 fix.go:56] duration metric: took 19.910598766s for fixHost
	I0210 14:06:37.257019  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.259761  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.260061  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.260095  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.260309  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.260516  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.260716  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.260828  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.261003  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:37.261211  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:37.261223  647891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 14:06:37.373077  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739196397.346971659
	
	I0210 14:06:37.373102  647891 fix.go:216] guest clock: 1739196397.346971659
	I0210 14:06:37.373109  647891 fix.go:229] Guest: 2025-02-10 14:06:37.346971659 +0000 UTC Remote: 2025-02-10 14:06:37.256999277 +0000 UTC m=+20.051538196 (delta=89.972382ms)
	I0210 14:06:37.373144  647891 fix.go:200] guest clock delta is within tolerance: 89.972382ms
	I0210 14:06:37.373150  647891 start.go:83] releasing machines lock for "default-k8s-diff-port-991097", held for 20.026829951s
	I0210 14:06:37.373175  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.373444  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:37.376107  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.376494  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.376541  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.376658  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377209  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377404  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377534  647891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 14:06:37.377589  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.377646  647891 ssh_runner.go:195] Run: cat /version.json
	I0210 14:06:37.377676  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.380159  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380444  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380557  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.380597  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380714  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.380818  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.380854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380890  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.380991  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.381076  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.381150  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.381210  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.381236  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.381376  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.461615  647891 ssh_runner.go:195] Run: systemctl --version
	I0210 14:06:37.484185  647891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 14:06:37.626066  647891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 14:06:37.632178  647891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 14:06:37.632269  647891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 14:06:37.649096  647891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 14:06:37.649125  647891 start.go:495] detecting cgroup driver to use...
	I0210 14:06:37.649207  647891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 14:06:37.666251  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 14:06:37.680465  647891 docker.go:217] disabling cri-docker service (if available) ...
	I0210 14:06:37.680513  647891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 14:06:37.694090  647891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 14:06:37.707550  647891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 14:06:37.831118  647891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 14:06:37.980607  647891 docker.go:233] disabling docker service ...
	I0210 14:06:37.980676  647891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 14:06:37.995113  647891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 14:06:38.009358  647891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 14:06:38.140399  647891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 14:06:38.254033  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 14:06:38.267735  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 14:06:38.286239  647891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 14:06:38.286326  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.296619  647891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 14:06:38.296675  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.306712  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.316772  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.326918  647891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 14:06:38.337280  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.347440  647891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.364350  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.374474  647891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 14:06:38.383773  647891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 14:06:38.383822  647891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 14:06:38.397731  647891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 14:06:38.407296  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:06:38.518444  647891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 14:06:38.609821  647891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 14:06:38.609897  647891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 14:06:38.614975  647891 start.go:563] Will wait 60s for crictl version
	I0210 14:06:38.615032  647891 ssh_runner.go:195] Run: which crictl
	I0210 14:06:38.618907  647891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 14:06:38.666752  647891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 14:06:38.666843  647891 ssh_runner.go:195] Run: crio --version
	I0210 14:06:38.695436  647891 ssh_runner.go:195] Run: crio --version
	I0210 14:06:38.724290  647891 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 14:06:38.725705  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:38.728442  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:38.728769  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:38.728804  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:38.728997  647891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 14:06:38.733358  647891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:06:38.746088  647891 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 14:06:38.746232  647891 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 14:06:38.746279  647891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:06:38.785698  647891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 14:06:38.785767  647891 ssh_runner.go:195] Run: which lz4
	I0210 14:06:38.790230  647891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 14:06:38.794584  647891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 14:06:38.794612  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 14:06:40.165093  647891 crio.go:462] duration metric: took 1.374905922s to copy over tarball
	I0210 14:06:40.165182  647891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 14:06:42.267000  647891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10178421s)
	I0210 14:06:42.267031  647891 crio.go:469] duration metric: took 2.101903432s to extract the tarball
	I0210 14:06:42.267039  647891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 14:06:42.304364  647891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:06:42.347839  647891 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 14:06:42.347867  647891 cache_images.go:84] Images are preloaded, skipping loading
	I0210 14:06:42.347877  647891 kubeadm.go:934] updating node { 192.168.39.38 8444 v1.32.1 crio true true} ...
	I0210 14:06:42.347999  647891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 14:06:42.348081  647891 ssh_runner.go:195] Run: crio config
	I0210 14:06:42.392127  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:06:42.392155  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:06:42.392168  647891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 14:06:42.392205  647891 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991097 NodeName:default-k8s-diff-port-991097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 14:06:42.392445  647891 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991097"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.38"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 14:06:42.392531  647891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 14:06:42.402790  647891 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 14:06:42.402866  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 14:06:42.412691  647891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0210 14:06:42.430227  647891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 14:06:42.447018  647891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0210 14:06:42.463855  647891 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0210 14:06:42.467830  647891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:06:42.479887  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:06:42.616347  647891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 14:06:42.633982  647891 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097 for IP: 192.168.39.38
	I0210 14:06:42.634012  647891 certs.go:194] generating shared ca certs ...
	I0210 14:06:42.634036  647891 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:06:42.634251  647891 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 14:06:42.634325  647891 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 14:06:42.634339  647891 certs.go:256] generating profile certs ...
	I0210 14:06:42.634464  647891 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.key
	I0210 14:06:42.634547  647891 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.key.653a5b77
	I0210 14:06:42.634633  647891 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.key
	I0210 14:06:42.634756  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 14:06:42.634790  647891 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 14:06:42.634804  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 14:06:42.634842  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 14:06:42.634877  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 14:06:42.634931  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 14:06:42.634990  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:06:42.635813  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 14:06:42.683471  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 14:06:42.717348  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 14:06:42.753582  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 14:06:42.786140  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0210 14:06:42.826849  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 14:06:42.854467  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 14:06:42.880065  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 14:06:42.907119  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 14:06:42.930542  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 14:06:42.953922  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 14:06:42.976830  647891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 14:06:42.993090  647891 ssh_runner.go:195] Run: openssl version
	I0210 14:06:42.999059  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 14:06:43.010187  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.014640  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.014690  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.020392  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 14:06:43.031108  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 14:06:43.041766  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.046208  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.046242  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.051895  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 14:06:43.062587  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 14:06:43.073217  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.077547  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.077594  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.083004  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 14:06:43.093687  647891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 14:06:43.098273  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 14:06:43.103884  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 14:06:43.109468  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 14:06:43.114957  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 14:06:43.120594  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 14:06:43.126311  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
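The `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds) before reusing it for the restart. The same check written directly in Go, as a rough sketch rather than minikube's actual implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// mirroring `openssl x509 -checkend` from the lines above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}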
	I0210 14:06:43.132094  647891 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpi
ration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:43.132170  647891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 14:06:43.132205  647891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:06:43.170719  647891 cri.go:89] found id: ""
	I0210 14:06:43.170794  647891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 14:06:43.181310  647891 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 14:06:43.181333  647891 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 14:06:43.181378  647891 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 14:06:43.191081  647891 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 14:06:43.191662  647891 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-991097" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:06:43.191931  647891 kubeconfig.go:62] /home/jenkins/minikube-integration/20390-580861/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-991097" cluster setting kubeconfig missing "default-k8s-diff-port-991097" context setting]
	I0210 14:06:43.192424  647891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:06:43.193695  647891 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 14:06:43.203483  647891 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0210 14:06:43.203510  647891 kubeadm.go:1160] stopping kube-system containers ...
	I0210 14:06:43.203522  647891 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 14:06:43.203565  647891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:06:43.248106  647891 cri.go:89] found id: ""
	I0210 14:06:43.248168  647891 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 14:06:43.264683  647891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:06:43.274810  647891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:06:43.274837  647891 kubeadm.go:157] found existing configuration files:
	
	I0210 14:06:43.274893  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0210 14:06:43.284346  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:06:43.284394  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:06:43.294116  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0210 14:06:43.303692  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:06:43.303743  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:06:43.313293  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0210 14:06:43.322835  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:06:43.322893  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:06:43.332538  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0210 14:06:43.341968  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:06:43.342030  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:06:43.351997  647891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:06:43.361911  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:43.471810  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.088121  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.292411  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.357453  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.447107  647891 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:06:44.447198  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:44.947672  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:45.447925  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:45.947638  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:46.447630  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:46.504665  647891 api_server.go:72] duration metric: took 2.057554604s to wait for apiserver process to appear ...
	I0210 14:06:46.504702  647891 api_server.go:88] waiting for apiserver healthz status ...
	I0210 14:06:46.504729  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:46.505324  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:06:47.005003  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:52.009445  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:06:52.009499  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:57.013020  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:06:57.013089  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:02.016406  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:02.016462  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.005061  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": read tcp 192.168.39.1:37208->192.168.39.38:8444: read: connection reset by peer
	I0210 14:07:07.005127  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.005704  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:07:10.919140  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:07:10.919450  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:07:10.919470  644218 kubeadm.go:310] 
	I0210 14:07:10.919531  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:07:10.919612  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:07:10.919643  644218 kubeadm.go:310] 
	I0210 14:07:10.919696  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:07:10.919740  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:07:10.919898  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:07:10.919908  644218 kubeadm.go:310] 
	I0210 14:07:10.920052  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:07:10.920108  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:07:10.920160  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:07:10.920171  644218 kubeadm.go:310] 
	I0210 14:07:10.920344  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:07:10.920471  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:07:10.920487  644218 kubeadm.go:310] 
	I0210 14:07:10.920637  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:07:10.920748  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:07:10.920852  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:07:10.920956  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:07:10.920968  644218 kubeadm.go:310] 
	I0210 14:07:10.921451  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:07:10.921558  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:07:10.921647  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0210 14:07:10.921820  644218 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
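When kubeadm gives up like this, the troubleshooting it suggests (systemctl status kubelet, journalctl -xeu kubelet, crictl ps against the CRI-O socket) is essentially the same data minikube collects later in this log under "Gathering logs for kubelet ..." and "Gathering logs for dmesg ...". A small stand-alone Go sketch that runs the same read-only diagnostics; the command names come from the output above, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// Run the diagnostics kubeadm recommends above and dump whatever they return.
// Error handling is deliberately loose; this is a post-mortem aid, not a check.
func main() {
	cmds := [][]string{
		{"systemctl", "status", "kubelet"},
		{"journalctl", "-xeu", "kubelet"},
		{"crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("$ %v\nerr=%v\n%s\n", c, err, out)
	}
}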
	
	I0210 14:07:10.921873  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:07:11.388800  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:07:11.404434  644218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:07:11.415583  644218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:07:11.415609  644218 kubeadm.go:157] found existing configuration files:
	
	I0210 14:07:11.415668  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 14:07:11.425343  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:07:11.425411  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:07:11.435126  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 14:07:11.444951  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:07:11.445016  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:07:11.454675  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.463839  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:07:11.463923  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.473621  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 14:07:11.482802  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:07:11.482864  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:07:11.492269  644218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 14:07:11.706383  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:07:07.505081  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.505697  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:07:08.005039  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:13.005418  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:13.005503  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:18.006035  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:18.006088  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:23.006412  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:23.006480  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:24.990987  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:24.991022  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:24.991041  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.094135  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:25.094175  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:25.094195  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.134411  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:25.134448  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:25.505023  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.510502  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 14:07:25.510542  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 14:07:26.004985  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:26.016527  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 14:07:26.016561  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 14:07:26.505209  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:26.512830  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 200:
	ok
	I0210 14:07:26.519490  647891 api_server.go:141] control plane version: v1.32.1
	I0210 14:07:26.519519  647891 api_server.go:131] duration metric: took 40.01480806s to wait for apiserver health ...
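The healthz sequence above is a plain poll loop: connection refused while the apiserver is still coming up, 403 for the anonymous user, 500 while the rbac/bootstrap-roles post-start hook finishes, and finally 200 ("ok"). A minimal sketch of such a loop in Go, assuming only the endpoint URL; the real minikube code also wires in client credentials and proper CA trust instead of skipping TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz probes url until it answers 200 or the timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence in the log
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.38:8444/healthz", 4*time.Minute))
}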
	I0210 14:07:26.519531  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:07:26.519541  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:07:26.521665  647891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 14:07:26.523188  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 14:07:26.534793  647891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
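The scp above installs the bridge CNI config at /etc/cni/net.d/1-k8s.conflist. The log records only its size (496 bytes), so the content below is a generic bridge-plus-portmap conflist for illustration, not the exact file minikube writes; the subnet and names are placeholders:

package main

import (
	"fmt"
	"os"
)

// A generic CNI bridge config in the spirit of what the scp above installs.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}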
	I0210 14:07:26.555242  647891 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 14:07:26.560044  647891 system_pods.go:59] 8 kube-system pods found
	I0210 14:07:26.560088  647891 system_pods.go:61] "coredns-668d6bf9bc-chvvk" [81bc9af8-1dbc-4299-9818-c5e28cd527a4] Running
	I0210 14:07:26.560096  647891 system_pods.go:61] "etcd-default-k8s-diff-port-991097" [d7991f48-f3f9-4585-9d42-8ac10fb95d65] Running
	I0210 14:07:26.560105  647891 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991097" [91a8d2ac-4127-4e49-a21e-95babe7078b1] Running
	I0210 14:07:26.560113  647891 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991097" [12fbb1be-d90f-47b2-a6e6-5d541e1c9cd3] Running
	I0210 14:07:26.560128  647891 system_pods.go:61] "kube-proxy-k94kp" [82230795-ec36-4619-a8bd-6b1520b2dcce] Running
	I0210 14:07:26.560133  647891 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991097" [98c775ff-82f9-42b1-a3ba-4a2d1830f6fc] Running
	I0210 14:07:26.560139  647891 system_pods.go:61] "metrics-server-f79f97bbb-j7gwv" [20814b8f-e1ca-4d3e-baa2-83fa85d5055e] Pending
	I0210 14:07:26.560144  647891 system_pods.go:61] "storage-provisioner" [f31ad609-ca85-4fbb-9fa7-b0fd93d6b504] Running
	I0210 14:07:26.560152  647891 system_pods.go:74] duration metric: took 4.884117ms to wait for pod list to return data ...
	I0210 14:07:26.560166  647891 node_conditions.go:102] verifying NodePressure condition ...
	I0210 14:07:26.563732  647891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 14:07:26.563765  647891 node_conditions.go:123] node cpu capacity is 2
	I0210 14:07:26.563783  647891 node_conditions.go:105] duration metric: took 3.607402ms to run NodePressure ...
	I0210 14:07:26.563811  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:07:26.839281  647891 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 14:07:26.842184  647891 retry.go:31] will retry after 267.442504ms: kubelet not initialised
	I0210 14:07:27.114654  647891 retry.go:31] will retry after 460.309798ms: kubelet not initialised
	I0210 14:07:27.580487  647891 retry.go:31] will retry after 468.648016ms: kubelet not initialised
	I0210 14:07:28.052957  647891 retry.go:31] will retry after 634.581788ms: kubelet not initialised
	I0210 14:07:28.692193  647891 retry.go:31] will retry after 1.585469768s: kubelet not initialised
	I0210 14:07:30.280814  647891 retry.go:31] will retry after 1.746270708s: kubelet not initialised
	I0210 14:07:32.035943  647891 kubeadm.go:739] kubelet initialised
	I0210 14:07:32.035970  647891 kubeadm.go:740] duration metric: took 5.19665458s waiting for restarted kubelet to initialise ...
	I0210 14:07:32.035982  647891 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:07:32.039938  647891 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.045939  647891 pod_ready.go:93] pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.045973  647891 pod_ready.go:82] duration metric: took 2.006006864s for pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.045988  647891 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.049855  647891 pod_ready.go:93] pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.049879  647891 pod_ready.go:82] duration metric: took 3.881494ms for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.049892  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.053608  647891 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.053629  647891 pod_ready.go:82] duration metric: took 3.729266ms for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.053642  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:36.060444  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:38.560369  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:41.059206  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:43.059645  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:44.559464  647891 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.559497  647891 pod_ready.go:82] duration metric: took 10.505846034s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.559509  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-k94kp" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.563350  647891 pod_ready.go:93] pod "kube-proxy-k94kp" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.563377  647891 pod_ready.go:82] duration metric: took 3.859986ms for pod "kube-proxy-k94kp" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.563391  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.567231  647891 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.567251  647891 pod_ready.go:82] duration metric: took 3.851395ms for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.567263  647891 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:46.573010  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:49.073487  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:51.075217  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:53.573637  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:56.072364  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:58.073033  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:00.074357  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:02.574325  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:05.074157  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:07.074228  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:09.572654  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:11.573678  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:14.071655  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:16.072359  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:18.074418  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:20.572441  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:22.573381  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:25.073116  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:27.571988  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:29.573021  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:32.072192  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:34.073218  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:36.073606  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:38.573206  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:41.073455  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:43.572727  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:45.573114  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:48.072635  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:50.072982  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:52.572772  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:55.072938  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:57.073602  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:59.572429  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:01.572682  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:03.572760  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:06.073768  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
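The pod_ready lines above poll each system-critical pod until its PodReady condition turns True. metrics-server stays "False" throughout; a plausible reading is that this profile points its image at the fake.domain registry (see CustomAddonRegistries in the StartCluster config earlier in this log), so the pod can never pull and become Ready, though the log does not state that outright. A tiny sketch of the readiness test itself, using the upstream k8s.io/api types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the check behind the pod_ready.go lines above: a pod
// counts as "Ready" when its PodReady condition reports status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(p)) // false, like metrics-server-f79f97bbb-j7gwv above
}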
	I0210 14:09:07.694951  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:09:07.695080  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 14:09:07.696680  644218 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 14:09:07.696776  644218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:09:07.696928  644218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:09:07.697091  644218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:09:07.697242  644218 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 14:09:07.697319  644218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:09:07.698867  644218 out.go:235]   - Generating certificates and keys ...
	I0210 14:09:07.698960  644218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:09:07.699052  644218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:09:07.699176  644218 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:09:07.699261  644218 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:09:07.699354  644218 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:09:07.699403  644218 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:09:07.699465  644218 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:09:07.699527  644218 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:09:07.699633  644218 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:09:07.699731  644218 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:09:07.699800  644218 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:09:07.699884  644218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:09:07.699960  644218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:09:07.700047  644218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:09:07.700138  644218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:09:07.700209  644218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:09:07.700322  644218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:09:07.700393  644218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:09:07.700436  644218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:09:07.700526  644218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:09:07.701917  644218 out.go:235]   - Booting up control plane ...
	I0210 14:09:07.702014  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:09:07.702107  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:09:07.702184  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:09:07.702300  644218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:09:07.702455  644218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 14:09:07.702532  644218 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 14:09:07.702626  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.702845  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.702940  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703134  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703216  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703373  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703435  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703588  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703650  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703819  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703826  644218 kubeadm.go:310] 
	I0210 14:09:07.703859  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:09:07.703893  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:09:07.703900  644218 kubeadm.go:310] 
	I0210 14:09:07.703933  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:09:07.703994  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:09:07.704123  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:09:07.704131  644218 kubeadm.go:310] 
	I0210 14:09:07.704298  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:09:07.704355  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:09:07.704403  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:09:07.704413  644218 kubeadm.go:310] 
	I0210 14:09:07.704552  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:09:07.704673  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:09:07.704685  644218 kubeadm.go:310] 
	I0210 14:09:07.704841  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:09:07.704960  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:09:07.705074  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:09:07.705199  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:09:07.705210  644218 kubeadm.go:310] 
	I0210 14:09:07.705291  644218 kubeadm.go:394] duration metric: took 7m58.218613622s to StartCluster
	I0210 14:09:07.705343  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:09:07.705405  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:09:07.750026  644218 cri.go:89] found id: ""
	I0210 14:09:07.750054  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.750063  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:09:07.750070  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:09:07.750136  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:09:07.793341  644218 cri.go:89] found id: ""
	I0210 14:09:07.793374  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.793386  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:09:07.793395  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:09:07.793455  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:09:07.835496  644218 cri.go:89] found id: ""
	I0210 14:09:07.835521  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.835538  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:09:07.835543  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:09:07.835620  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:09:07.869619  644218 cri.go:89] found id: ""
	I0210 14:09:07.869655  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.869663  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:09:07.869669  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:09:07.869735  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:09:07.927211  644218 cri.go:89] found id: ""
	I0210 14:09:07.927243  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.927253  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:09:07.927261  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:09:07.927331  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:09:07.966320  644218 cri.go:89] found id: ""
	I0210 14:09:07.966355  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.966365  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:09:07.966374  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:09:07.966437  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:09:07.999268  644218 cri.go:89] found id: ""
	I0210 14:09:07.999302  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.999313  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:09:07.999321  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:09:07.999389  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:09:08.039339  644218 cri.go:89] found id: ""
	I0210 14:09:08.039371  644218 logs.go:282] 0 containers: []
	W0210 14:09:08.039380  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0210 14:09:08.039391  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:09:08.039404  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:09:08.091644  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:09:08.091675  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:09:08.105318  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:09:08.105346  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:09:08.182104  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:09:08.182127  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:09:08.182140  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:09:08.287929  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:09:08.287974  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0210 14:09:08.331764  644218 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 14:09:08.331884  644218 out.go:270] * 
	W0210 14:09:08.332053  644218 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.332079  644218 out.go:270] * 
	W0210 14:09:08.333029  644218 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 14:09:08.336162  644218 out.go:201] 
	W0210 14:09:08.337200  644218 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.337269  644218 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 14:09:08.337316  644218 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 14:09:08.339083  644218 out.go:201] 
	I0210 14:09:08.574570  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:11.072543  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:13.573301  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:15.573572  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:18.073259  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:20.075503  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:22.573109  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:25.073412  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:27.573006  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:29.573328  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:31.574361  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:34.072762  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:36.574072  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:39.073539  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:41.573025  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:43.573580  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:46.072848  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:48.072967  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:50.573107  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:53.073370  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:55.573158  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:58.072342  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:00.072754  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:02.074034  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:04.074722  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:06.572250  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:08.572718  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:10.573231  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:12.573418  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:15.073637  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:17.573333  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:20.072833  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:22.572801  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:24.576464  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:27.073032  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:29.573284  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:32.073083  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:34.577658  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:37.072763  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:39.571996  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:41.572345  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:43.574031  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:46.073658  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:48.573611  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:51.072756  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:53.073565  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:55.572482  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:57.572577  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:00.072828  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:02.572873  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:04.573206  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:06.573564  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:09.072900  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:11.073012  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:13.073099  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:15.572178  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:17.572235  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:19.573626  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:22.072581  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:24.072885  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:26.073024  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:28.073396  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:30.573530  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:32.574839  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:35.073176  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:37.573717  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:40.072207  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:42.073250  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:44.073336  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:44.567861  647891 pod_ready.go:82] duration metric: took 4m0.000569197s for pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace to be "Ready" ...
	E0210 14:11:44.567904  647891 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0210 14:11:44.567935  647891 pod_ready.go:39] duration metric: took 4m12.5319365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:11:44.567975  647891 kubeadm.go:597] duration metric: took 5m1.386634957s to restartPrimaryControlPlane
	W0210 14:11:44.568092  647891 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 14:11:44.568135  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:12:12.327344  647891 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.759174157s)
	I0210 14:12:12.327426  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:12:12.356706  647891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:12:12.370489  647891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:12:12.389582  647891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:12:12.389606  647891 kubeadm.go:157] found existing configuration files:
	
	I0210 14:12:12.389665  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0210 14:12:12.406178  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:12:12.406240  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:12:12.416269  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0210 14:12:12.425666  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:12:12.425722  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:12:12.442382  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0210 14:12:12.451653  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:12:12.451700  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:12:12.461152  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0210 14:12:12.470257  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:12:12.470309  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:12:12.479927  647891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 14:12:12.526468  647891 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 14:12:12.526533  647891 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:12:12.646027  647891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:12:12.646189  647891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:12:12.646291  647891 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 14:12:12.657926  647891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:12:12.660818  647891 out.go:235]   - Generating certificates and keys ...
	I0210 14:12:12.660928  647891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:12:12.661022  647891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:12:12.661164  647891 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:12:12.661261  647891 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:12:12.661358  647891 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:12:12.661464  647891 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:12:12.661568  647891 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:12:12.661650  647891 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:12:12.661748  647891 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:12:12.661862  647891 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:12:12.661917  647891 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:12:12.661998  647891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:12:12.780092  647891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:12:12.997667  647891 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 14:12:13.165032  647891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:12:13.297324  647891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:12:13.407861  647891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:12:13.408365  647891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:12:13.411477  647891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:12:13.413309  647891 out.go:235]   - Booting up control plane ...
	I0210 14:12:13.413450  647891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:12:13.413547  647891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:12:13.415050  647891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:12:13.433081  647891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:12:13.441419  647891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:12:13.441482  647891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:12:13.567261  647891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 14:12:13.567429  647891 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 14:12:14.080023  647891 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.899029ms
	I0210 14:12:14.080151  647891 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 14:12:19.082293  647891 kubeadm.go:310] [api-check] The API server is healthy after 5.00209227s
	I0210 14:12:19.097053  647891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 14:12:19.128233  647891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 14:12:19.181291  647891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 14:12:19.181616  647891 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-991097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 14:12:19.194310  647891 kubeadm.go:310] [bootstrap-token] Using token: mnjk32.fgjackbr8f6xpsoe
	I0210 14:12:19.195599  647891 out.go:235]   - Configuring RBAC rules ...
	I0210 14:12:19.195756  647891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 14:12:19.207224  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 14:12:19.218283  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 14:12:19.223717  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 14:12:19.236200  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 14:12:19.244351  647891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 14:12:19.488623  647891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 14:12:19.926025  647891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 14:12:20.490610  647891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 14:12:20.490635  647891 kubeadm.go:310] 
	I0210 14:12:20.490702  647891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 14:12:20.490708  647891 kubeadm.go:310] 
	I0210 14:12:20.490797  647891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 14:12:20.490805  647891 kubeadm.go:310] 
	I0210 14:12:20.490826  647891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 14:12:20.490883  647891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 14:12:20.490951  647891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 14:12:20.490959  647891 kubeadm.go:310] 
	I0210 14:12:20.491041  647891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 14:12:20.491053  647891 kubeadm.go:310] 
	I0210 14:12:20.491096  647891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 14:12:20.491108  647891 kubeadm.go:310] 
	I0210 14:12:20.491216  647891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 14:12:20.491344  647891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 14:12:20.491441  647891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 14:12:20.491451  647891 kubeadm.go:310] 
	I0210 14:12:20.491568  647891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 14:12:20.491678  647891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 14:12:20.491690  647891 kubeadm.go:310] 
	I0210 14:12:20.491762  647891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token mnjk32.fgjackbr8f6xpsoe \
	I0210 14:12:20.491847  647891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cda6234c21caed8b2c457fd9fd9a427fa0fd7aae97fbc146e2dc2d4939983fe9 \
	I0210 14:12:20.491879  647891 kubeadm.go:310] 	--control-plane 
	I0210 14:12:20.491889  647891 kubeadm.go:310] 
	I0210 14:12:20.491958  647891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 14:12:20.491968  647891 kubeadm.go:310] 
	I0210 14:12:20.492034  647891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token mnjk32.fgjackbr8f6xpsoe \
	I0210 14:12:20.492133  647891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cda6234c21caed8b2c457fd9fd9a427fa0fd7aae97fbc146e2dc2d4939983fe9 
	I0210 14:12:20.493401  647891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:12:20.493482  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:12:20.493514  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:12:20.495183  647891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 14:12:20.496353  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 14:12:20.509131  647891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 14:12:20.529282  647891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 14:12:20.529370  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:20.529403  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-991097 minikube.k8s.io/updated_at=2025_02_10T14_12_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04 minikube.k8s.io/name=default-k8s-diff-port-991097 minikube.k8s.io/primary=true
	I0210 14:12:20.544926  647891 ops.go:34] apiserver oom_adj: -16
	I0210 14:12:20.760939  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:21.261178  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:21.761401  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:22.262028  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:22.761178  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:23.261587  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:23.761361  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:23.872332  647891 kubeadm.go:1113] duration metric: took 3.343041771s to wait for elevateKubeSystemPrivileges
	I0210 14:12:23.872373  647891 kubeadm.go:394] duration metric: took 5m40.740283252s to StartCluster
	I0210 14:12:23.872399  647891 settings.go:142] acquiring lock: {Name:mk7daa7e5390489a50205707c4b69542e21eb74b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:12:23.872537  647891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:12:23.873372  647891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:12:23.873648  647891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 14:12:23.873753  647891 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 14:12:23.873853  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:12:23.873887  647891 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873905  647891 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873913  647891 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873922  647891 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873927  647891 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.873938  647891 addons.go:247] addon dashboard should already be in state true
	I0210 14:12:23.873939  647891 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.873952  647891 addons.go:247] addon metrics-server should already be in state true
	I0210 14:12:23.873952  647891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991097"
	I0210 14:12:23.873979  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.873988  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.873912  647891 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.874043  647891 addons.go:247] addon storage-provisioner should already be in state true
	I0210 14:12:23.874086  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.874363  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874413  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874364  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874366  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874488  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874496  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874547  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874552  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874949  647891 out.go:177] * Verifying Kubernetes components...
	I0210 14:12:23.876130  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:12:23.890806  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0210 14:12:23.890815  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I0210 14:12:23.891456  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.891467  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.891547  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0210 14:12:23.892086  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.892113  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.892140  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.892234  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.892260  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.892620  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.892677  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.892756  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.892784  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.893133  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.893208  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.893259  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.893279  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.893318  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.893670  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.893725  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.895428  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0210 14:12:23.895965  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.896617  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.896644  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.897105  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.897324  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.900256  647891 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.900294  647891 addons.go:247] addon default-storageclass should already be in state true
	I0210 14:12:23.900324  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.900640  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.900680  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.912795  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0210 14:12:23.913054  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0210 14:12:23.913323  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.913658  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.913858  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.913884  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.914232  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.914252  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.914320  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.914363  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0210 14:12:23.914498  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.914699  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.914829  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.915120  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.915140  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.915307  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.915673  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.915884  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.916520  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.916649  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0210 14:12:23.917160  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.917660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.917859  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.917986  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.918005  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.918315  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.918450  647891 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 14:12:23.918704  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.918848  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.919214  647891 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 14:12:23.919222  647891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:12:23.920579  647891 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 14:12:23.920587  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 14:12:23.920608  647891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 14:12:23.920624  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.920692  647891 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 14:12:23.920705  647891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 14:12:23.920723  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.921692  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 14:12:23.921732  647891 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 14:12:23.921750  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.924480  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.925009  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.925039  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.925163  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.925210  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.925419  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.925609  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.926027  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.926047  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:23.926098  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.926117  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.926143  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.926326  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.926439  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.926472  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.926485  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.926693  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.926752  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:23.927013  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.927142  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.927264  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:23.936911  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0210 14:12:23.937355  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.937894  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.937916  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.938211  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.938418  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.939948  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.940165  647891 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 14:12:23.940182  647891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 14:12:23.940201  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.943037  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.943450  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.943481  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.943582  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.943751  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.943873  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.944008  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:24.052454  647891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 14:12:24.072649  647891 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991097" to be "Ready" ...
	I0210 14:12:24.096932  647891 node_ready.go:49] node "default-k8s-diff-port-991097" has status "Ready":"True"
	I0210 14:12:24.096960  647891 node_ready.go:38] duration metric: took 24.264753ms for node "default-k8s-diff-port-991097" to be "Ready" ...
	I0210 14:12:24.096970  647891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:12:24.099847  647891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:24.138048  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 14:12:24.138085  647891 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 14:12:24.138256  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 14:12:24.141886  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 14:12:24.166242  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 14:12:24.166277  647891 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 14:12:24.206775  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 14:12:24.206801  647891 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 14:12:24.226641  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 14:12:24.226667  647891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 14:12:24.245183  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 14:12:24.245208  647891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 14:12:24.278451  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 14:12:24.278493  647891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 14:12:24.306222  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 14:12:24.306256  647891 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 14:12:24.342596  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 14:12:24.342630  647891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 14:12:24.399654  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:24.399689  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:24.400136  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:24.400160  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:24.400175  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:24.400184  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:24.400477  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:24.400499  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:24.400506  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:24.418656  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:24.418678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:24.418964  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:24.418997  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:24.419003  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:24.435724  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 14:12:24.435747  647891 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 14:12:24.449333  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 14:12:24.528025  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 14:12:24.528058  647891 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 14:12:24.616254  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 14:12:24.616294  647891 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 14:12:24.710068  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 14:12:24.710108  647891 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 14:12:24.828857  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 14:12:25.153728  647891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.011798056s)
	I0210 14:12:25.153806  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.153823  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.154144  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.154171  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.154187  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.154203  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.154213  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.154482  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.154482  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.154508  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.587013  647891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.137616057s)
	I0210 14:12:25.587092  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.587120  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.587435  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.587489  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.587532  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.587586  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.587599  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.587870  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.587920  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.587928  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.587937  647891 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-991097"
	I0210 14:12:26.117299  647891 pod_ready.go:103] pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:12:27.032443  647891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.203528578s)
	I0210 14:12:27.032495  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:27.032511  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:27.032867  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:27.032888  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:27.032896  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:27.032901  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:27.033216  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:27.033248  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:27.033245  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:27.035191  647891 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-991097 addons enable metrics-server
	
	I0210 14:12:27.036488  647891 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0210 14:12:27.037799  647891 addons.go:514] duration metric: took 3.16405216s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0210 14:12:28.105934  647891 pod_ready.go:93] pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:28.105960  647891 pod_ready.go:82] duration metric: took 4.006089526s for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.105971  647891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.111533  647891 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:28.111558  647891 pod_ready.go:82] duration metric: took 5.581237ms for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.111568  647891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.116636  647891 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:28.116668  647891 pod_ready.go:82] duration metric: took 5.091992ms for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.116681  647891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:30.123433  647891 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:12:31.624379  647891 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:31.624406  647891 pod_ready.go:82] duration metric: took 3.507715801s for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:31.624414  647891 pod_ready.go:39] duration metric: took 7.527433406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:12:31.624430  647891 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:12:31.624479  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:12:31.647536  647891 api_server.go:72] duration metric: took 7.773850883s to wait for apiserver process to appear ...
	I0210 14:12:31.647560  647891 api_server.go:88] waiting for apiserver healthz status ...
	I0210 14:12:31.647580  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:12:31.652686  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 200:
	ok
	I0210 14:12:31.653569  647891 api_server.go:141] control plane version: v1.32.1
	I0210 14:12:31.653594  647891 api_server.go:131] duration metric: took 6.025911ms to wait for apiserver health ...
	I0210 14:12:31.653604  647891 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 14:12:31.656891  647891 system_pods.go:59] 9 kube-system pods found
	I0210 14:12:31.656923  647891 system_pods.go:61] "coredns-668d6bf9bc-28wch" [927d1cd9-ae9d-4278-84d5-5bd3239cd786] Running
	I0210 14:12:31.656928  647891 system_pods.go:61] "coredns-668d6bf9bc-nmbcp" [2c1a705f-ab6a-41ef-a4d9-50e3ca250ed9] Running
	I0210 14:12:31.656931  647891 system_pods.go:61] "etcd-default-k8s-diff-port-991097" [b0b539ce-5f91-40a5-8d70-0a75dfe2ed6a] Running
	I0210 14:12:31.656935  647891 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991097" [671ba619-e5e2-4907-a13d-2c67be54a92e] Running
	I0210 14:12:31.656938  647891 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991097" [8c8cb8cc-e70f-4f8f-8f9b-05c43759c492] Running
	I0210 14:12:31.656941  647891 system_pods.go:61] "kube-proxy-q4hfw" [4be41fa0-22f6-412b-87ef-c7348699fc31] Running
	I0210 14:12:31.656947  647891 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991097" [ea3d0af7-156b-444d-967e-67226742cbe7] Running
	I0210 14:12:31.656957  647891 system_pods.go:61] "metrics-server-f79f97bbb-88dls" [61895ed1-ecb5-4d33-94bd-1c8c73f7ed51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 14:12:31.656967  647891 system_pods.go:61] "storage-provisioner" [5684753f-8a90-4d05-9562-5dd0d567de4a] Running
	I0210 14:12:31.656976  647891 system_pods.go:74] duration metric: took 3.364979ms to wait for pod list to return data ...
	I0210 14:12:31.656984  647891 default_sa.go:34] waiting for default service account to be created ...
	I0210 14:12:31.665386  647891 default_sa.go:45] found service account: "default"
	I0210 14:12:31.665409  647891 default_sa.go:55] duration metric: took 8.414491ms for default service account to be created ...
	I0210 14:12:31.665416  647891 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 14:12:31.668407  647891 system_pods.go:86] 9 kube-system pods found
	I0210 14:12:31.668429  647891 system_pods.go:89] "coredns-668d6bf9bc-28wch" [927d1cd9-ae9d-4278-84d5-5bd3239cd786] Running
	I0210 14:12:31.668435  647891 system_pods.go:89] "coredns-668d6bf9bc-nmbcp" [2c1a705f-ab6a-41ef-a4d9-50e3ca250ed9] Running
	I0210 14:12:31.668439  647891 system_pods.go:89] "etcd-default-k8s-diff-port-991097" [b0b539ce-5f91-40a5-8d70-0a75dfe2ed6a] Running
	I0210 14:12:31.668443  647891 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991097" [671ba619-e5e2-4907-a13d-2c67be54a92e] Running
	I0210 14:12:31.668447  647891 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991097" [8c8cb8cc-e70f-4f8f-8f9b-05c43759c492] Running
	I0210 14:12:31.668450  647891 system_pods.go:89] "kube-proxy-q4hfw" [4be41fa0-22f6-412b-87ef-c7348699fc31] Running
	I0210 14:12:31.668453  647891 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991097" [ea3d0af7-156b-444d-967e-67226742cbe7] Running
	I0210 14:12:31.668459  647891 system_pods.go:89] "metrics-server-f79f97bbb-88dls" [61895ed1-ecb5-4d33-94bd-1c8c73f7ed51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 14:12:31.668463  647891 system_pods.go:89] "storage-provisioner" [5684753f-8a90-4d05-9562-5dd0d567de4a] Running
	I0210 14:12:31.668472  647891 system_pods.go:126] duration metric: took 3.049778ms to wait for k8s-apps to be running ...
	I0210 14:12:31.668480  647891 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 14:12:31.668529  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:12:31.715692  647891 system_svc.go:56] duration metric: took 47.199919ms WaitForService to wait for kubelet
	I0210 14:12:31.715721  647891 kubeadm.go:582] duration metric: took 7.842039698s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 14:12:31.715745  647891 node_conditions.go:102] verifying NodePressure condition ...
	I0210 14:12:31.718389  647891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 14:12:31.718422  647891 node_conditions.go:123] node cpu capacity is 2
	I0210 14:12:31.718442  647891 node_conditions.go:105] duration metric: took 2.692752ms to run NodePressure ...
	I0210 14:12:31.718453  647891 start.go:241] waiting for startup goroutines ...
	I0210 14:12:31.718463  647891 start.go:246] waiting for cluster config update ...
	I0210 14:12:31.718473  647891 start.go:255] writing updated cluster config ...
	I0210 14:12:31.718739  647891 ssh_runner.go:195] Run: rm -f paused
	I0210 14:12:31.773099  647891 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 14:12:31.774883  647891 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991097" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.749632415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197090749607120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=192775ca-0000-40d3-8d18-d7daafc9192c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.750112685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=603384d6-28c8-4bc2-b0c6-3bdf569abaf2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.750180947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=603384d6-28c8-4bc2-b0c6-3bdf569abaf2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.750231633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=603384d6-28c8-4bc2-b0c6-3bdf569abaf2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.781601396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd3064d6-3442-4c9e-b45c-b1fbc3d34a75 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.781697853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd3064d6-3442-4c9e-b45c-b1fbc3d34a75 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.783431149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffc55e83-71d8-44f2-bbc6-35b8f18216b6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.783955819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197090783930274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffc55e83-71d8-44f2-bbc6-35b8f18216b6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.784571852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a764e52-0b5d-4844-a848-0f49f4944100 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.784645150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a764e52-0b5d-4844-a848-0f49f4944100 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.784694582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3a764e52-0b5d-4844-a848-0f49f4944100 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.818318748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4b08561-65fd-481e-8fba-6f373367915e name=/runtime.v1.RuntimeService/Version
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.818408860Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4b08561-65fd-481e-8fba-6f373367915e name=/runtime.v1.RuntimeService/Version
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.819931998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5eaf75c5-0f73-45cd-9b82-86867803fd1a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.820316630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197090820285217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5eaf75c5-0f73-45cd-9b82-86867803fd1a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.821035227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f2a80ea-a39d-431b-9f33-a083db1449b8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.821084444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f2a80ea-a39d-431b-9f33-a083db1449b8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.821117539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9f2a80ea-a39d-431b-9f33-a083db1449b8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.852470171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40258fb7-5e59-4944-9b72-7d6aa203d6ea name=/runtime.v1.RuntimeService/Version
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.852611234Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40258fb7-5e59-4944-9b72-7d6aa203d6ea name=/runtime.v1.RuntimeService/Version
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.853695951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcaf7016-42bc-42a0-ae39-b8acfbbcbefa name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.854076691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197090854056634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcaf7016-42bc-42a0-ae39-b8acfbbcbefa name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.854622183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6526f7c-c5e6-40df-90fb-32e200dd801c name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.854704058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6526f7c-c5e6-40df-90fb-32e200dd801c name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:18:10 old-k8s-version-643105 crio[627]: time="2025-02-10 14:18:10.854751990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b6526f7c-c5e6-40df-90fb-32e200dd801c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 14:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053008] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041885] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.089243] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.827394] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.420700] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb10 14:01] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.115834] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.165556] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.132361] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.253937] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.572683] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.063864] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.037525] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +14.230154] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 14:05] systemd-fstab-generator[4994]: Ignoring "noauto" option for root device
	[Feb10 14:07] systemd-fstab-generator[5274]: Ignoring "noauto" option for root device
	[  +0.063648] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:18:11 up 17 min,  0 users,  load average: 0.08, 0.04, 0.05
	Linux old-k8s-version-643105 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 14:18:08 old-k8s-version-643105 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: net.(*Resolver).lookupIP(0x70c5740, 0x4f7fdc0, 0xc0007f8e00, 0x48ab5d6, 0x3, 0xc000b52ab0, 0x1f, 0x0, 0x0, 0x787d12, ...)
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]:         /usr/local/go/src/net/lookup_unix.go:96 +0x187
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: net.glob..func1(0x4f7fdc0, 0xc0007f8e00, 0xc0007eddf0, 0x48ab5d6, 0x3, 0xc000b52ab0, 0x1f, 0xc000120018, 0x0, 0xc0007f7da0, ...)
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]:         /usr/local/go/src/net/hook.go:23 +0x72
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: net.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]:         /usr/local/go/src/net/lookup.go:293 +0xb9
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc0009d9540, 0xc000b52ae0, 0x23, 0xc0007f8e40)
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: created by internal/singleflight.(*Group).DoChan
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: goroutine 163 [runnable]:
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: net.cgoIPLookup(0xc000b7d500, 0x48ab5d6, 0x3, 0xc000b52ab0, 0x1f)
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]:         /usr/local/go/src/net/cgo_unix.go:217
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]: created by net.cgoLookupIP
	Feb 10 14:18:08 old-k8s-version-643105 kubelet[6453]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Feb 10 14:18:08 old-k8s-version-643105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 14:18:09 old-k8s-version-643105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 10 14:18:09 old-k8s-version-643105 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 14:18:09 old-k8s-version-643105 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 14:18:09 old-k8s-version-643105 kubelet[6461]: I0210 14:18:09.382902    6461 server.go:416] Version: v1.20.0
	Feb 10 14:18:09 old-k8s-version-643105 kubelet[6461]: I0210 14:18:09.383165    6461 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 14:18:09 old-k8s-version-643105 kubelet[6461]: I0210 14:18:09.385211    6461 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 14:18:09 old-k8s-version-643105 kubelet[6461]: I0210 14:18:09.386260    6461 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 10 14:18:09 old-k8s-version-643105 kubelet[6461]: W0210 14:18:09.386282    6461 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (229.924775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-643105" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (378.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:18:16.655785  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[previous warning repeated 27 more times]
E0210 14:19:00.165176  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[previous warning repeated 34 more times]
E0210 14:19:34.780445  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
E0210 14:19:35.565580  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[previous warning repeated 11 more times]
E0210 14:19:47.443034  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[previous warning repeated 14 more times]
E0210 14:20:02.481823  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[previous warning repeated 30 more times]
E0210 14:20:33.642438  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[previous warning repeated 9 more times]
E0210 14:20:43.815616  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
[previous warning repeated 34 more times]
E0210 14:21:18.942966  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[previous warning repeated 8 more times]
E0210 14:21:27.875032  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[previous warning repeated 45 more times]
E0210 14:22:13.585346  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[previous warning repeated 29 more times]
E0210 14:22:44.327466  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[previous warning repeated 6 more times]
E0210 14:22:50.941052  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[previous warning repeated 25 more times]
E0210 14:23:16.656628  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[previous warning repeated 19 more times]
E0210 14:23:36.717667  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.78:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.78:8443: connect: connection refused
	[previous warning repeated 22 more times]
E0210 14:24:00.165502  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
[the identical connection-refused pod-list warning above repeated 29 more times while the test polled the stopped apiserver]
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (236.372001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-643105" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-643105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-643105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.649µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-643105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
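For reference, a minimal sketch of the image check this assertion performs, assuming the apiserver were reachable; the jsonpath query below is illustrative and not part of the test suite:

	kubectl --context old-k8s-version-643105 -n kubernetes-dashboard \
	  get deploy/dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to print an image containing registry.k8s.io/echoserver:1.4, the custom
	# MetricsScraper image that this test configures for the dashboard addon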
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (230.244052ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-643105 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-187291 --memory=2200 --alsologtostderr   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:03 UTC | 10 Feb 25 14:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | no-preload-264648 image list                           | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	| delete  | -p no-preload-264648                                   | no-preload-264648            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-372614 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | disable-driver-mounts-372614                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-991097  | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:06 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-187291             | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:04 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-187291                  | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-187291 --memory=2200 --alsologtostderr   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-187291 image list                           | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	| delete  | -p newest-cni-187291                                   | newest-cni-187291            | jenkins | v1.35.0 | 10 Feb 25 14:05 UTC | 10 Feb 25 14:05 UTC |
	| addons  | enable dashboard -p default-k8s-diff-port-991097       | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:06 UTC | 10 Feb 25 14:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:06 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-991097                           | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-991097 | jenkins | v1.35.0 | 10 Feb 25 14:12 UTC | 10 Feb 25 14:12 UTC |
	|         | default-k8s-diff-port-991097                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 14:06:17
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 14:06:17.243747  647891 out.go:345] Setting OutFile to fd 1 ...
	I0210 14:06:17.244049  647891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:06:17.244060  647891 out.go:358] Setting ErrFile to fd 2...
	I0210 14:06:17.244065  647891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 14:06:17.244273  647891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 14:06:17.244886  647891 out.go:352] Setting JSON to false
	I0210 14:06:17.245898  647891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13722,"bootTime":1739182655,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 14:06:17.246027  647891 start.go:139] virtualization: kvm guest
	I0210 14:06:17.248712  647891 out.go:177] * [default-k8s-diff-port-991097] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 14:06:17.249739  647891 notify.go:220] Checking for updates...
	I0210 14:06:17.249783  647891 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 14:06:17.250816  647891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 14:06:17.251974  647891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:06:17.252995  647891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 14:06:17.254055  647891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 14:06:17.255160  647891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 14:06:17.256646  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:06:17.257053  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.257103  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.272251  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0210 14:06:17.272688  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.273235  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.273265  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.273611  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.273803  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.274066  647891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 14:06:17.274374  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.274410  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.289090  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0210 14:06:17.289485  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.289921  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.289940  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.290252  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.290429  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.324494  647891 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 14:06:17.325653  647891 start.go:297] selected driver: kvm2
	I0210 14:06:17.325667  647891 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:17.325821  647891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 14:06:17.326767  647891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:06:17.326863  647891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 14:06:17.341811  647891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 14:06:17.342243  647891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 14:06:17.342292  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:06:17.342352  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:06:17.342403  647891 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:17.342546  647891 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 14:06:17.344647  647891 out.go:177] * Starting "default-k8s-diff-port-991097" primary control-plane node in "default-k8s-diff-port-991097" cluster
	I0210 14:06:17.345834  647891 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 14:06:17.345863  647891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 14:06:17.345881  647891 cache.go:56] Caching tarball of preloaded images
	I0210 14:06:17.345970  647891 preload.go:172] Found /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0210 14:06:17.345985  647891 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0210 14:06:17.346082  647891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/config.json ...
	I0210 14:06:17.346270  647891 start.go:360] acquireMachinesLock for default-k8s-diff-port-991097: {Name:mk8965eeb51c8b935262413ef180599688209442 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 14:06:17.346312  647891 start.go:364] duration metric: took 22.484µs to acquireMachinesLock for "default-k8s-diff-port-991097"
	I0210 14:06:17.346326  647891 start.go:96] Skipping create...Using existing machine configuration
	I0210 14:06:17.346396  647891 fix.go:54] fixHost starting: 
	I0210 14:06:17.346671  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:06:17.346702  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:06:17.362026  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0210 14:06:17.362460  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:06:17.362937  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:06:17.362960  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:06:17.363308  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:06:17.363509  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:17.363660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:06:17.365186  647891 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991097: state=Stopped err=<nil>
	I0210 14:06:17.365227  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	W0210 14:06:17.365370  647891 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 14:06:17.367081  647891 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-991097" ...
	I0210 14:06:17.368184  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Start
	I0210 14:06:17.368392  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) starting domain...
	I0210 14:06:17.368412  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) ensuring networks are active...
	I0210 14:06:17.369033  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Ensuring network default is active
	I0210 14:06:17.369340  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Ensuring network mk-default-k8s-diff-port-991097 is active
	I0210 14:06:17.369654  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) getting domain XML...
	I0210 14:06:17.370420  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) creating domain...
	I0210 14:06:18.584048  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) waiting for IP...
	I0210 14:06:18.584938  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.585440  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.585547  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:18.585443  647926 retry.go:31] will retry after 284.933629ms: waiting for domain to come up
	I0210 14:06:18.872073  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.872628  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:18.872654  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:18.872603  647926 retry.go:31] will retry after 252.055679ms: waiting for domain to come up
	I0210 14:06:19.125837  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.126311  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.126344  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.126282  647926 retry.go:31] will retry after 411.979825ms: waiting for domain to come up
	I0210 14:06:19.540074  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.540626  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.540658  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.540586  647926 retry.go:31] will retry after 404.768184ms: waiting for domain to come up
	I0210 14:06:19.947166  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.947685  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:19.947741  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:19.947665  647926 retry.go:31] will retry after 556.378156ms: waiting for domain to come up
	I0210 14:06:20.505361  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:20.505826  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:20.505867  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:20.505784  647926 retry.go:31] will retry after 866.999674ms: waiting for domain to come up
	I0210 14:06:21.374890  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:21.375452  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:21.375483  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:21.375399  647926 retry.go:31] will retry after 773.54598ms: waiting for domain to come up
	I0210 14:06:22.150227  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:22.150626  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:22.150649  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:22.150606  647926 retry.go:31] will retry after 1.159257258s: waiting for domain to come up
	I0210 14:06:23.311620  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:23.312197  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:23.312231  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:23.312136  647926 retry.go:31] will retry after 1.322774288s: waiting for domain to come up
	I0210 14:06:24.636617  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:24.637078  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:24.637106  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:24.637035  647926 retry.go:31] will retry after 1.698355707s: waiting for domain to come up
	I0210 14:06:26.337653  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:26.338239  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:26.338269  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:26.338193  647926 retry.go:31] will retry after 2.301675582s: waiting for domain to come up
	I0210 14:06:30.917338  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:06:30.917550  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:06:28.642137  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:28.642701  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:28.642735  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:28.642637  647926 retry.go:31] will retry after 3.42557087s: waiting for domain to come up
	I0210 14:06:32.072208  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:32.072678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | unable to find current IP address of domain default-k8s-diff-port-991097 in network mk-default-k8s-diff-port-991097
	I0210 14:06:32.072705  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | I0210 14:06:32.072653  647926 retry.go:31] will retry after 4.016224279s: waiting for domain to come up
	I0210 14:06:36.093333  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.093867  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has current primary IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.093891  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) found domain IP: 192.168.39.38
	I0210 14:06:36.093900  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) reserving static IP address...
	I0210 14:06:36.094346  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-991097", mac: "52:54:00:41:07:a8", ip: "192.168.39.38"} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.094400  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | skip adding static IP to network mk-default-k8s-diff-port-991097 - found existing host DHCP lease matching {name: "default-k8s-diff-port-991097", mac: "52:54:00:41:07:a8", ip: "192.168.39.38"}
	I0210 14:06:36.094419  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) reserved static IP address 192.168.39.38 for domain default-k8s-diff-port-991097
	I0210 14:06:36.094435  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) waiting for SSH...
	I0210 14:06:36.094449  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Getting to WaitForSSH function...
	I0210 14:06:36.096338  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.096691  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.096731  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.096845  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Using SSH client type: external
	I0210 14:06:36.096888  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Using SSH private key: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa (-rw-------)
	I0210 14:06:36.096933  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0210 14:06:36.096951  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | About to run SSH command:
	I0210 14:06:36.096961  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | exit 0
	I0210 14:06:36.224595  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | SSH cmd err, output: <nil>: 
	I0210 14:06:36.224941  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetConfigRaw
	I0210 14:06:36.225577  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:36.228100  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.228466  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.228488  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.228753  647891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/config.json ...
	I0210 14:06:36.228952  647891 machine.go:93] provisionDockerMachine start ...
	I0210 14:06:36.228976  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:36.229205  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.231380  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.231680  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.231715  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.231796  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.232000  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.232158  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.232320  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.232502  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.232716  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.232730  647891 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 14:06:36.348884  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 14:06:36.348927  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.349191  647891 buildroot.go:166] provisioning hostname "default-k8s-diff-port-991097"
	I0210 14:06:36.349222  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.349449  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.352262  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.352630  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.352660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.352854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.353039  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.353197  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.353338  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.353529  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.353760  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.353774  647891 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-991097 && echo "default-k8s-diff-port-991097" | sudo tee /etc/hostname
	I0210 14:06:36.482721  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-991097
	
	I0210 14:06:36.482754  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.485405  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.485793  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.485839  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.485972  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.486202  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.486369  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.486526  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.486705  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.486883  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.486900  647891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-991097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-991097/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-991097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 14:06:36.609135  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 14:06:36.609166  647891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20390-580861/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-580861/.minikube}
	I0210 14:06:36.609210  647891 buildroot.go:174] setting up certificates
	I0210 14:06:36.609221  647891 provision.go:84] configureAuth start
	I0210 14:06:36.609232  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetMachineName
	I0210 14:06:36.609479  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:36.612210  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.612560  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.612587  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.612688  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.614722  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.615063  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.615108  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.615271  647891 provision.go:143] copyHostCerts
	I0210 14:06:36.615343  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem, removing ...
	I0210 14:06:36.615358  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem
	I0210 14:06:36.615420  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/ca.pem (1078 bytes)
	I0210 14:06:36.615522  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem, removing ...
	I0210 14:06:36.615530  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem
	I0210 14:06:36.615553  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/cert.pem (1123 bytes)
	I0210 14:06:36.615617  647891 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem, removing ...
	I0210 14:06:36.615624  647891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem
	I0210 14:06:36.615645  647891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-580861/.minikube/key.pem (1675 bytes)
	I0210 14:06:36.615712  647891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-991097 san=[127.0.0.1 192.168.39.38 default-k8s-diff-port-991097 localhost minikube]
	I0210 14:06:36.700551  647891 provision.go:177] copyRemoteCerts
	I0210 14:06:36.700630  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 14:06:36.700660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.703231  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.703510  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.703552  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.703684  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.703854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.704015  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.704123  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:36.791354  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 14:06:36.815844  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 14:06:36.839837  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0210 14:06:36.864605  647891 provision.go:87] duration metric: took 255.365505ms to configureAuth
	I0210 14:06:36.864653  647891 buildroot.go:189] setting minikube options for container-runtime
	I0210 14:06:36.864900  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:06:36.864986  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:36.867500  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.867819  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:36.867843  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:36.868078  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:36.868301  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.868445  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:36.868556  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:36.868671  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:36.868837  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:36.868851  647891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0210 14:06:37.117664  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0210 14:06:37.117702  647891 machine.go:96] duration metric: took 888.734538ms to provisionDockerMachine
	I0210 14:06:37.117738  647891 start.go:293] postStartSetup for "default-k8s-diff-port-991097" (driver="kvm2")
	I0210 14:06:37.117752  647891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 14:06:37.117780  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.118146  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 14:06:37.118185  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.121015  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.121387  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.121420  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.121678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.121877  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.122038  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.122167  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.212791  647891 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 14:06:37.217377  647891 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 14:06:37.217399  647891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/addons for local assets ...
	I0210 14:06:37.217455  647891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-580861/.minikube/files for local assets ...
	I0210 14:06:37.217531  647891 filesync.go:149] local asset: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem -> 5881402.pem in /etc/ssl/certs
	I0210 14:06:37.217617  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 14:06:37.229155  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:06:37.256944  647891 start.go:296] duration metric: took 139.188892ms for postStartSetup
	I0210 14:06:37.256995  647891 fix.go:56] duration metric: took 19.910598766s for fixHost
	I0210 14:06:37.257019  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.259761  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.260061  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.260095  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.260309  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.260516  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.260716  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.260828  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.261003  647891 main.go:141] libmachine: Using SSH client type: native
	I0210 14:06:37.261211  647891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0210 14:06:37.261223  647891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 14:06:37.373077  647891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739196397.346971659
	
	I0210 14:06:37.373102  647891 fix.go:216] guest clock: 1739196397.346971659
	I0210 14:06:37.373109  647891 fix.go:229] Guest: 2025-02-10 14:06:37.346971659 +0000 UTC Remote: 2025-02-10 14:06:37.256999277 +0000 UTC m=+20.051538196 (delta=89.972382ms)
	I0210 14:06:37.373144  647891 fix.go:200] guest clock delta is within tolerance: 89.972382ms
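[Editor's note] The two `fix.go` lines above show how the restart path validates guest clock drift after provisioning: it runs `date +%s.%N` inside the VM and compares the result against the host clock. A minimal standalone sketch of that comparison (not minikube source; the 2-second tolerance is an assumption made only for illustration):

```go
// clockdrift.go: illustrative sketch of the guest-clock drift check reported
// in the log above. It parses the guest's `date +%s.%N` output and compares
// it with the local host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClock turns a string like "1739196397.346971659" into a time.Time.
func guestClock(raw string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(raw), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part so it is exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Timestamp taken from the log line above; at run time the delta will of
	// course be large, since this is only a demonstration of the parsing.
	guest, err := guestClock("1739196397.346971659")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold, illustration only
	fmt.Printf("delta=%v within tolerance: %v\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}
```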
	I0210 14:06:37.373150  647891 start.go:83] releasing machines lock for "default-k8s-diff-port-991097", held for 20.026829951s
	I0210 14:06:37.373175  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.373444  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:37.376107  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.376494  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.376541  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.376658  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377209  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377404  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:06:37.377534  647891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 14:06:37.377589  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.377646  647891 ssh_runner.go:195] Run: cat /version.json
	I0210 14:06:37.377676  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:06:37.380159  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380444  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380557  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.380597  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380714  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.380818  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:37.380854  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:37.380890  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.380991  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:06:37.381076  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.381150  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:06:37.381210  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.381236  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:06:37.381376  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:06:37.461615  647891 ssh_runner.go:195] Run: systemctl --version
	I0210 14:06:37.484185  647891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0210 14:06:37.626066  647891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 14:06:37.632178  647891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 14:06:37.632269  647891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 14:06:37.649096  647891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 14:06:37.649125  647891 start.go:495] detecting cgroup driver to use...
	I0210 14:06:37.649207  647891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 14:06:37.666251  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 14:06:37.680465  647891 docker.go:217] disabling cri-docker service (if available) ...
	I0210 14:06:37.680513  647891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 14:06:37.694090  647891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 14:06:37.707550  647891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 14:06:37.831118  647891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 14:06:37.980607  647891 docker.go:233] disabling docker service ...
	I0210 14:06:37.980676  647891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 14:06:37.995113  647891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 14:06:38.009358  647891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 14:06:38.140399  647891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 14:06:38.254033  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 14:06:38.267735  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 14:06:38.286239  647891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0210 14:06:38.286326  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.296619  647891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0210 14:06:38.296675  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.306712  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.316772  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.326918  647891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 14:06:38.337280  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.347440  647891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.364350  647891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0210 14:06:38.374474  647891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 14:06:38.383773  647891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 14:06:38.383822  647891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 14:06:38.397731  647891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 14:06:38.407296  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:06:38.518444  647891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0210 14:06:38.609821  647891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0210 14:06:38.609897  647891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0210 14:06:38.614975  647891 start.go:563] Will wait 60s for crictl version
	I0210 14:06:38.615032  647891 ssh_runner.go:195] Run: which crictl
	I0210 14:06:38.618907  647891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 14:06:38.666752  647891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0210 14:06:38.666843  647891 ssh_runner.go:195] Run: crio --version
	I0210 14:06:38.695436  647891 ssh_runner.go:195] Run: crio --version
	I0210 14:06:38.724290  647891 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0210 14:06:38.725705  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetIP
	I0210 14:06:38.728442  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:38.728769  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:06:38.728804  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:06:38.728997  647891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0210 14:06:38.733358  647891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:06:38.746088  647891 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 14:06:38.746232  647891 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 14:06:38.746279  647891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:06:38.785698  647891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0210 14:06:38.785767  647891 ssh_runner.go:195] Run: which lz4
	I0210 14:06:38.790230  647891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 14:06:38.794584  647891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 14:06:38.794612  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0210 14:06:40.165093  647891 crio.go:462] duration metric: took 1.374905922s to copy over tarball
	I0210 14:06:40.165182  647891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 14:06:42.267000  647891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10178421s)
	I0210 14:06:42.267031  647891 crio.go:469] duration metric: took 2.101903432s to extract the tarball
	I0210 14:06:42.267039  647891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 14:06:42.304364  647891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 14:06:42.347839  647891 crio.go:514] all images are preloaded for cri-o runtime.
	I0210 14:06:42.347867  647891 cache_images.go:84] Images are preloaded, skipping loading
	I0210 14:06:42.347877  647891 kubeadm.go:934] updating node { 192.168.39.38 8444 v1.32.1 crio true true} ...
	I0210 14:06:42.347999  647891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-991097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 14:06:42.348081  647891 ssh_runner.go:195] Run: crio config
	I0210 14:06:42.392127  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:06:42.392155  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:06:42.392168  647891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 14:06:42.392205  647891 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-991097 NodeName:default-k8s-diff-port-991097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 14:06:42.392445  647891 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-991097"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.38"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 14:06:42.392531  647891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 14:06:42.402790  647891 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 14:06:42.402866  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 14:06:42.412691  647891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0210 14:06:42.430227  647891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 14:06:42.447018  647891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0210 14:06:42.463855  647891 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0210 14:06:42.467830  647891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 14:06:42.479887  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:06:42.616347  647891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 14:06:42.633982  647891 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097 for IP: 192.168.39.38
	I0210 14:06:42.634012  647891 certs.go:194] generating shared ca certs ...
	I0210 14:06:42.634036  647891 certs.go:226] acquiring lock for ca certs: {Name:mke8c1aa990d3a76a836ac71745addefa2a8ba27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:06:42.634251  647891 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key
	I0210 14:06:42.634325  647891 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key
	I0210 14:06:42.634339  647891 certs.go:256] generating profile certs ...
	I0210 14:06:42.634464  647891 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/client.key
	I0210 14:06:42.634547  647891 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.key.653a5b77
	I0210 14:06:42.634633  647891 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.key
	I0210 14:06:42.634756  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem (1338 bytes)
	W0210 14:06:42.634790  647891 certs.go:480] ignoring /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140_empty.pem, impossibly tiny 0 bytes
	I0210 14:06:42.634804  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 14:06:42.634842  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/ca.pem (1078 bytes)
	I0210 14:06:42.634877  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/cert.pem (1123 bytes)
	I0210 14:06:42.634931  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/certs/key.pem (1675 bytes)
	I0210 14:06:42.634990  647891 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem (1708 bytes)
	I0210 14:06:42.635813  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 14:06:42.683471  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 14:06:42.717348  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 14:06:42.753582  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0210 14:06:42.786140  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0210 14:06:42.826849  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 14:06:42.854467  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 14:06:42.880065  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/default-k8s-diff-port-991097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 14:06:42.907119  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/certs/588140.pem --> /usr/share/ca-certificates/588140.pem (1338 bytes)
	I0210 14:06:42.930542  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/ssl/certs/5881402.pem --> /usr/share/ca-certificates/5881402.pem (1708 bytes)
	I0210 14:06:42.953922  647891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-580861/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 14:06:42.976830  647891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 14:06:42.993090  647891 ssh_runner.go:195] Run: openssl version
	I0210 14:06:42.999059  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/588140.pem && ln -fs /usr/share/ca-certificates/588140.pem /etc/ssl/certs/588140.pem"
	I0210 14:06:43.010187  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.014640  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 12:52 /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.014690  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/588140.pem
	I0210 14:06:43.020392  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/588140.pem /etc/ssl/certs/51391683.0"
	I0210 14:06:43.031108  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5881402.pem && ln -fs /usr/share/ca-certificates/5881402.pem /etc/ssl/certs/5881402.pem"
	I0210 14:06:43.041766  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.046208  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 12:52 /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.046242  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5881402.pem
	I0210 14:06:43.051895  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5881402.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 14:06:43.062587  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 14:06:43.073217  647891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.077547  647891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:45 /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.077594  647891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 14:06:43.083004  647891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 14:06:43.093687  647891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 14:06:43.098273  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 14:06:43.103884  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 14:06:43.109468  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 14:06:43.114957  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 14:06:43.120594  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 14:06:43.126311  647891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
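[Editor's note] The run of `openssl x509 -noout -in <cert> -checkend 86400` commands above is how the restart path decides whether each existing control-plane certificate is still valid for at least another 24 hours before reusing it. A rough Go equivalent using only the standard library (the file path is taken from the log; the helper name is made up for this sketch):

```go
// certcheck.go: illustrative equivalent of `openssl x509 -checkend 86400`:
// report whether a PEM-encoded certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+window, i.e. the cert is about to expire.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```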
	I0210 14:06:43.132094  647891 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-991097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-991097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 14:06:43.132170  647891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0210 14:06:43.132205  647891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:06:43.170719  647891 cri.go:89] found id: ""
	I0210 14:06:43.170794  647891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 14:06:43.181310  647891 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 14:06:43.181333  647891 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 14:06:43.181378  647891 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 14:06:43.191081  647891 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 14:06:43.191662  647891 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-991097" does not appear in /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:06:43.191931  647891 kubeconfig.go:62] /home/jenkins/minikube-integration/20390-580861/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-991097" cluster setting kubeconfig missing "default-k8s-diff-port-991097" context setting]
	I0210 14:06:43.192424  647891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:06:43.193695  647891 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 14:06:43.203483  647891 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0210 14:06:43.203510  647891 kubeadm.go:1160] stopping kube-system containers ...
	I0210 14:06:43.203522  647891 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0210 14:06:43.203565  647891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 14:06:43.248106  647891 cri.go:89] found id: ""
	I0210 14:06:43.248168  647891 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 14:06:43.264683  647891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:06:43.274810  647891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:06:43.274837  647891 kubeadm.go:157] found existing configuration files:
	
	I0210 14:06:43.274893  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0210 14:06:43.284346  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:06:43.284394  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:06:43.294116  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0210 14:06:43.303692  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:06:43.303743  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:06:43.313293  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0210 14:06:43.322835  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:06:43.322893  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:06:43.332538  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0210 14:06:43.341968  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:06:43.342030  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:06:43.351997  647891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:06:43.361911  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:43.471810  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.088121  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.292411  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.357453  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:06:44.447107  647891 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:06:44.447198  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:44.947672  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:45.447925  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:45.947638  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:46.447630  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:06:46.504665  647891 api_server.go:72] duration metric: took 2.057554604s to wait for apiserver process to appear ...
	I0210 14:06:46.504702  647891 api_server.go:88] waiting for apiserver healthz status ...
	I0210 14:06:46.504729  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:46.505324  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:06:47.005003  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:52.009445  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:06:52.009499  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:06:57.013020  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:06:57.013089  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:02.016406  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:02.016462  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.005061  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": read tcp 192.168.39.1:37208->192.168.39.38:8444: read: connection reset by peer
	I0210 14:07:07.005127  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.005704  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
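	(The api_server.go lines above are a health-probe loop: roughly every 500ms minikube issues a GET against https://192.168.39.38:8444/healthz with a short per-request timeout and logs each connection-refused or timeout result until the endpoint finally answers. A standalone sketch of that polling pattern, not minikube's actual code, with a hypothetical endpoint and arbitrary timings, and TLS verification disabled purely for illustration where the real client would trust the cluster CA:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // pollHealthz probes url until it returns HTTP 200 or the overall budget
	    // is spent. Each attempt gets its own short timeout, mirroring the
	    // "Checking apiserver healthz ... / stopped: ..." pairs in the log.
	    func pollHealthz(url string, every, perTry, overall time.Duration) error {
	        client := &http.Client{
	            Timeout: perTry,
	            // Illustration only: skip verification instead of loading the CA.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(overall)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err != nil {
	                fmt.Printf("stopped: %v\n", err) // connection refused, timeout, reset by peer, ...
	            } else {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	                fmt.Printf("returned %d\n", resp.StatusCode) // e.g. 403 before RBAC bootstrap, 500 while post-start hooks run
	            }
	            time.Sleep(every)
	        }
	        return fmt.Errorf("healthz at %s not ready within %s", url, overall)
	    }

	    func main() {
	        // Hypothetical local endpoint; the run above targets 192.168.39.38:8444.
	        fmt.Println(pollHealthz("https://127.0.0.1:8444/healthz", 500*time.Millisecond, 5*time.Second, 4*time.Minute))
	    }
	)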
	I0210 14:07:10.919140  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:07:10.919450  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:07:10.919470  644218 kubeadm.go:310] 
	I0210 14:07:10.919531  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:07:10.919612  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:07:10.919643  644218 kubeadm.go:310] 
	I0210 14:07:10.919696  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:07:10.919740  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:07:10.919898  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:07:10.919908  644218 kubeadm.go:310] 
	I0210 14:07:10.920052  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:07:10.920108  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:07:10.920160  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:07:10.920171  644218 kubeadm.go:310] 
	I0210 14:07:10.920344  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:07:10.920471  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:07:10.920487  644218 kubeadm.go:310] 
	I0210 14:07:10.920637  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:07:10.920748  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:07:10.920852  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:07:10.920956  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:07:10.920968  644218 kubeadm.go:310] 
	I0210 14:07:10.921451  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:07:10.921558  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:07:10.921647  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0210 14:07:10.921820  644218 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
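	(kubeadm's advice above reduces to three checks on the node: systemctl status kubelet, journalctl -xeu kubelet, and a crictl listing of control-plane containers. A small sketch of running those checks in sequence and printing whatever they return, assuming the tools are on PATH and the caller already has the needed privileges:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // The three checks kubeadm suggests when wait-control-plane times out.
	        cmds := [][]string{
	            {"systemctl", "status", "kubelet", "--no-pager"},
	            {"journalctl", "-xeu", "kubelet", "--no-pager", "-n", "100"},
	            {"crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a"},
	        }
	        for _, c := range cmds {
	            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
	            fmt.Printf("$ %v\n%s", c, out)
	            if err != nil {
	                fmt.Printf("(exited with error: %v)\n", err)
	            }
	        }
	    }
	)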
	
	I0210 14:07:10.921873  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:07:11.388800  644218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:07:11.404434  644218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:07:11.415583  644218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:07:11.415609  644218 kubeadm.go:157] found existing configuration files:
	
	I0210 14:07:11.415668  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 14:07:11.425343  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:07:11.425411  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:07:11.435126  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 14:07:11.444951  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:07:11.445016  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:07:11.454675  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.463839  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:07:11.463923  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:07:11.473621  644218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 14:07:11.482802  644218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:07:11.482864  644218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:07:11.492269  644218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 14:07:11.706383  644218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:07:07.505081  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:07.505697  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": dial tcp 192.168.39.38:8444: connect: connection refused
	I0210 14:07:08.005039  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:13.005418  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:13.005503  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:18.006035  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:18.006088  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:23.006412  647891 api_server.go:269] stopped: https://192.168.39.38:8444/healthz: Get "https://192.168.39.38:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0210 14:07:23.006480  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:24.990987  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:24.991022  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:24.991041  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.094135  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:25.094175  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:25.094195  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.134411  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 14:07:25.134448  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 14:07:25.505023  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:25.510502  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 14:07:25.510542  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 14:07:26.004985  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:26.016527  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 14:07:26.016561  647891 api_server.go:103] status: https://192.168.39.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 14:07:26.505209  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:07:26.512830  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 200:
	ok
	I0210 14:07:26.519490  647891 api_server.go:141] control plane version: v1.32.1
	I0210 14:07:26.519519  647891 api_server.go:131] duration metric: took 40.01480806s to wait for apiserver health ...
	I0210 14:07:26.519531  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:07:26.519541  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:07:26.521665  647891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 14:07:26.523188  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 14:07:26.534793  647891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 14:07:26.555242  647891 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 14:07:26.560044  647891 system_pods.go:59] 8 kube-system pods found
	I0210 14:07:26.560088  647891 system_pods.go:61] "coredns-668d6bf9bc-chvvk" [81bc9af8-1dbc-4299-9818-c5e28cd527a4] Running
	I0210 14:07:26.560096  647891 system_pods.go:61] "etcd-default-k8s-diff-port-991097" [d7991f48-f3f9-4585-9d42-8ac10fb95d65] Running
	I0210 14:07:26.560105  647891 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991097" [91a8d2ac-4127-4e49-a21e-95babe7078b1] Running
	I0210 14:07:26.560113  647891 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991097" [12fbb1be-d90f-47b2-a6e6-5d541e1c9cd3] Running
	I0210 14:07:26.560128  647891 system_pods.go:61] "kube-proxy-k94kp" [82230795-ec36-4619-a8bd-6b1520b2dcce] Running
	I0210 14:07:26.560133  647891 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991097" [98c775ff-82f9-42b1-a3ba-4a2d1830f6fc] Running
	I0210 14:07:26.560139  647891 system_pods.go:61] "metrics-server-f79f97bbb-j7gwv" [20814b8f-e1ca-4d3e-baa2-83fa85d5055e] Pending
	I0210 14:07:26.560144  647891 system_pods.go:61] "storage-provisioner" [f31ad609-ca85-4fbb-9fa7-b0fd93d6b504] Running
	I0210 14:07:26.560152  647891 system_pods.go:74] duration metric: took 4.884117ms to wait for pod list to return data ...
	I0210 14:07:26.560166  647891 node_conditions.go:102] verifying NodePressure condition ...
	I0210 14:07:26.563732  647891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 14:07:26.563765  647891 node_conditions.go:123] node cpu capacity is 2
	I0210 14:07:26.563783  647891 node_conditions.go:105] duration metric: took 3.607402ms to run NodePressure ...
	I0210 14:07:26.563811  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 14:07:26.839281  647891 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 14:07:26.842184  647891 retry.go:31] will retry after 267.442504ms: kubelet not initialised
	I0210 14:07:27.114654  647891 retry.go:31] will retry after 460.309798ms: kubelet not initialised
	I0210 14:07:27.580487  647891 retry.go:31] will retry after 468.648016ms: kubelet not initialised
	I0210 14:07:28.052957  647891 retry.go:31] will retry after 634.581788ms: kubelet not initialised
	I0210 14:07:28.692193  647891 retry.go:31] will retry after 1.585469768s: kubelet not initialised
	I0210 14:07:30.280814  647891 retry.go:31] will retry after 1.746270708s: kubelet not initialised
	I0210 14:07:32.035943  647891 kubeadm.go:739] kubelet initialised
	I0210 14:07:32.035970  647891 kubeadm.go:740] duration metric: took 5.19665458s waiting for restarted kubelet to initialise ...
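	(The "will retry after ..." lines above come from a retry helper: the check runs, and on failure the caller sleeps for a jittered, roughly doubling interval before trying again, up to an overall cap. A generic sketch of that pattern, with intervals and budget chosen arbitrarily rather than taken from retry.go:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retry runs fn until it succeeds or the budget is spent, sleeping a
	    // jittered, roughly doubling interval between attempts, as in the
	    // 267ms / 460ms / 634ms / 1.58s sequence above.
	    func retry(fn func() error, initial, budget time.Duration) error {
	        start := time.Now()
	        wait := initial
	        for {
	            err := fn()
	            if err == nil {
	                return nil
	            }
	            if time.Since(start) > budget {
	                return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Millisecond), err)
	            }
	            // Up to 50% jitter so concurrent callers do not retry in lockstep.
	            sleep := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
	            fmt.Printf("will retry after %s: %v\n", sleep.Round(time.Millisecond), err)
	            time.Sleep(sleep)
	            wait *= 2
	        }
	    }

	    func main() {
	        // Hypothetical check standing in for "kubelet not initialised".
	        attempts := 0
	        err := retry(func() error {
	            attempts++
	            if attempts < 4 {
	                return errors.New("kubelet not initialised")
	            }
	            return nil
	        }, 250*time.Millisecond, 30*time.Second)
	        fmt.Println(err, "after", attempts, "attempts")
	    }
	)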
	I0210 14:07:32.035982  647891 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:07:32.039938  647891 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.045939  647891 pod_ready.go:93] pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.045973  647891 pod_ready.go:82] duration metric: took 2.006006864s for pod "coredns-668d6bf9bc-chvvk" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.045988  647891 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.049855  647891 pod_ready.go:93] pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.049879  647891 pod_ready.go:82] duration metric: took 3.881494ms for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.049892  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.053608  647891 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:34.053629  647891 pod_ready.go:82] duration metric: took 3.729266ms for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:34.053642  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:36.060444  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:38.560369  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:41.059206  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:43.059645  647891 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:44.559464  647891 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.559497  647891 pod_ready.go:82] duration metric: took 10.505846034s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.559509  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-k94kp" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.563350  647891 pod_ready.go:93] pod "kube-proxy-k94kp" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.563377  647891 pod_ready.go:82] duration metric: took 3.859986ms for pod "kube-proxy-k94kp" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.563391  647891 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.567231  647891 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:07:44.567251  647891 pod_ready.go:82] duration metric: took 3.851395ms for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:44.567263  647891 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace to be "Ready" ...
	I0210 14:07:46.573010  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:49.073487  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:51.075217  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:53.573637  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:56.072364  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:07:58.073033  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:00.074357  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:02.574325  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:05.074157  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:07.074228  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:09.572654  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:11.573678  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:14.071655  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:16.072359  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:18.074418  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:20.572441  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:22.573381  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:25.073116  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:27.571988  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:29.573021  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:32.072192  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:34.073218  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:36.073606  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:38.573206  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:41.073455  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:43.572727  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:45.573114  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:48.072635  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:50.072982  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:52.572772  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:55.072938  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:57.073602  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:08:59.572429  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:01.572682  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:03.572760  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:06.073768  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
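	(Each pod_ready.go line above is one poll of a pod's status, waiting for its Ready condition to report True; metrics-server never gets there in this run. A stripped-down version of that check using client-go, with a hypothetical kubeconfig path and hard-coded namespace/pod name, not minikube's actual implementation:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's Ready condition is True, the same
	    // signal the pod_ready.go lines above wait on.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        // Hypothetical kubeconfig path.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)

	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-f79f97bbb-j7gwv", metav1.GetOptions{})
	            if err == nil && isPodReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            fmt.Println("pod not Ready yet")
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pod to become Ready")
	    }
	)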
	I0210 14:09:07.694951  644218 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0210 14:09:07.695080  644218 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0210 14:09:07.696680  644218 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0210 14:09:07.696776  644218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:09:07.696928  644218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:09:07.697091  644218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:09:07.697242  644218 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0210 14:09:07.697319  644218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:09:07.698867  644218 out.go:235]   - Generating certificates and keys ...
	I0210 14:09:07.698960  644218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:09:07.699052  644218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:09:07.699176  644218 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:09:07.699261  644218 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:09:07.699354  644218 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:09:07.699403  644218 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:09:07.699465  644218 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:09:07.699527  644218 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:09:07.699633  644218 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:09:07.699731  644218 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:09:07.699800  644218 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:09:07.699884  644218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:09:07.699960  644218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:09:07.700047  644218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:09:07.700138  644218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:09:07.700209  644218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:09:07.700322  644218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:09:07.700393  644218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:09:07.700436  644218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:09:07.700526  644218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:09:07.701917  644218 out.go:235]   - Booting up control plane ...
	I0210 14:09:07.702014  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:09:07.702107  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:09:07.702184  644218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:09:07.702300  644218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:09:07.702455  644218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0210 14:09:07.702532  644218 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0210 14:09:07.702626  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.702845  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.702940  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703134  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703216  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703373  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703435  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703588  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703650  644218 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0210 14:09:07.703819  644218 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0210 14:09:07.703826  644218 kubeadm.go:310] 
	I0210 14:09:07.703859  644218 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0210 14:09:07.703893  644218 kubeadm.go:310] 		timed out waiting for the condition
	I0210 14:09:07.703900  644218 kubeadm.go:310] 
	I0210 14:09:07.703933  644218 kubeadm.go:310] 	This error is likely caused by:
	I0210 14:09:07.703994  644218 kubeadm.go:310] 		- The kubelet is not running
	I0210 14:09:07.704123  644218 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0210 14:09:07.704131  644218 kubeadm.go:310] 
	I0210 14:09:07.704298  644218 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0210 14:09:07.704355  644218 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0210 14:09:07.704403  644218 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0210 14:09:07.704413  644218 kubeadm.go:310] 
	I0210 14:09:07.704552  644218 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0210 14:09:07.704673  644218 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0210 14:09:07.704685  644218 kubeadm.go:310] 
	I0210 14:09:07.704841  644218 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0210 14:09:07.704960  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0210 14:09:07.705074  644218 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0210 14:09:07.705199  644218 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0210 14:09:07.705210  644218 kubeadm.go:310] 
	I0210 14:09:07.705291  644218 kubeadm.go:394] duration metric: took 7m58.218613622s to StartCluster
	I0210 14:09:07.705343  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0210 14:09:07.705405  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 14:09:07.750026  644218 cri.go:89] found id: ""
	I0210 14:09:07.750054  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.750063  644218 logs.go:284] No container was found matching "kube-apiserver"
	I0210 14:09:07.750070  644218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0210 14:09:07.750136  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 14:09:07.793341  644218 cri.go:89] found id: ""
	I0210 14:09:07.793374  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.793386  644218 logs.go:284] No container was found matching "etcd"
	I0210 14:09:07.793395  644218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0210 14:09:07.793455  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 14:09:07.835496  644218 cri.go:89] found id: ""
	I0210 14:09:07.835521  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.835538  644218 logs.go:284] No container was found matching "coredns"
	I0210 14:09:07.835543  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0210 14:09:07.835620  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 14:09:07.869619  644218 cri.go:89] found id: ""
	I0210 14:09:07.869655  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.869663  644218 logs.go:284] No container was found matching "kube-scheduler"
	I0210 14:09:07.869669  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0210 14:09:07.869735  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 14:09:07.927211  644218 cri.go:89] found id: ""
	I0210 14:09:07.927243  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.927253  644218 logs.go:284] No container was found matching "kube-proxy"
	I0210 14:09:07.927261  644218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 14:09:07.927331  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 14:09:07.966320  644218 cri.go:89] found id: ""
	I0210 14:09:07.966355  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.966365  644218 logs.go:284] No container was found matching "kube-controller-manager"
	I0210 14:09:07.966374  644218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0210 14:09:07.966437  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 14:09:07.999268  644218 cri.go:89] found id: ""
	I0210 14:09:07.999302  644218 logs.go:282] 0 containers: []
	W0210 14:09:07.999313  644218 logs.go:284] No container was found matching "kindnet"
	I0210 14:09:07.999321  644218 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 14:09:07.999389  644218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 14:09:08.039339  644218 cri.go:89] found id: ""
	I0210 14:09:08.039371  644218 logs.go:282] 0 containers: []
	W0210 14:09:08.039380  644218 logs.go:284] No container was found matching "kubernetes-dashboard"
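	(The cri.go / logs.go pairs above are the post-mortem sweep: for each expected component the runner executes sudo crictl ps -a --quiet --name=<component> and counts the returned container IDs; here every list comes back empty, confirming the control plane never produced containers. A small sketch of that listing step, assuming crictl and the crio socket are present on the host:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainers returns the container IDs crictl reports for a name
	    // filter; an empty slice corresponds to the "0 containers" lines above.
	    func listContainers(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl",
	            "--runtime-endpoint", "/var/run/crio/crio.sock",
	            "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        var ids []string
	        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	            if line != "" {
	                ids = append(ids, line)
	            }
	        }
	        return ids, nil
	    }

	    func main() {
	        for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
	            ids, err := listContainers(name)
	            if err != nil {
	                fmt.Printf("%s: crictl failed: %v\n", name, err)
	                continue
	            }
	            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	        }
	    }
	)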
	I0210 14:09:08.039391  644218 logs.go:123] Gathering logs for kubelet ...
	I0210 14:09:08.039404  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 14:09:08.091644  644218 logs.go:123] Gathering logs for dmesg ...
	I0210 14:09:08.091675  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 14:09:08.105318  644218 logs.go:123] Gathering logs for describe nodes ...
	I0210 14:09:08.105346  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0210 14:09:08.182104  644218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0210 14:09:08.182127  644218 logs.go:123] Gathering logs for CRI-O ...
	I0210 14:09:08.182140  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0210 14:09:08.287929  644218 logs.go:123] Gathering logs for container status ...
	I0210 14:09:08.287974  644218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0210 14:09:08.331764  644218 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0210 14:09:08.331884  644218 out.go:270] * 
	W0210 14:09:08.332053  644218 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.332079  644218 out.go:270] * 
	W0210 14:09:08.333029  644218 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 14:09:08.336162  644218 out.go:201] 
	W0210 14:09:08.337200  644218 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0210 14:09:08.337269  644218 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0210 14:09:08.337316  644218 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0210 14:09:08.339083  644218 out.go:201] 
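	# A sketch of the retry suggested above; <profile> is a placeholder for the affected minikube
	# profile (its name is not shown in this excerpt), and the driver/runtime flags mirror this
	# job's KVM_Linux_crio configuration.
	out/minikube-linux-amd64 ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd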
	I0210 14:09:08.574570  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:11.072543  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:13.573301  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:15.573572  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:18.073259  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:20.075503  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:22.573109  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:25.073412  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:27.573006  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:29.573328  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:31.574361  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:34.072762  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:36.574072  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:39.073539  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:41.573025  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:43.573580  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:46.072848  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:48.072967  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:50.573107  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:53.073370  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:55.573158  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:09:58.072342  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:00.072754  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:02.074034  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:04.074722  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:06.572250  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:08.572718  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:10.573231  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:12.573418  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:15.073637  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:17.573333  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:20.072833  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:22.572801  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:24.576464  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:27.073032  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:29.573284  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:32.073083  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:34.577658  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:37.072763  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:39.571996  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:41.572345  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:43.574031  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:46.073658  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:48.573611  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:51.072756  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:53.073565  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:55.572482  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:10:57.572577  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:00.072828  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:02.572873  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:04.573206  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:06.573564  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:09.072900  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:11.073012  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:13.073099  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:15.572178  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:17.572235  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:19.573626  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:22.072581  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:24.072885  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:26.073024  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:28.073396  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:30.573530  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:32.574839  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:35.073176  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:37.573717  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:40.072207  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:42.073250  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:44.073336  647891 pod_ready.go:103] pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace has status "Ready":"False"
	I0210 14:11:44.567861  647891 pod_ready.go:82] duration metric: took 4m0.000569197s for pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace to be "Ready" ...
	E0210 14:11:44.567904  647891 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-j7gwv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0210 14:11:44.567935  647891 pod_ready.go:39] duration metric: took 4m12.5319365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:11:44.567975  647891 kubeadm.go:597] duration metric: took 5m1.386634957s to restartPrimaryControlPlane
	W0210 14:11:44.568092  647891 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0210 14:11:44.568135  647891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0210 14:12:12.327344  647891 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.759174157s)
	I0210 14:12:12.327426  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:12:12.356706  647891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 14:12:12.370489  647891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 14:12:12.389582  647891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 14:12:12.389606  647891 kubeadm.go:157] found existing configuration files:
	
	I0210 14:12:12.389665  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0210 14:12:12.406178  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 14:12:12.406240  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 14:12:12.416269  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0210 14:12:12.425666  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 14:12:12.425722  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 14:12:12.442382  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0210 14:12:12.451653  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 14:12:12.451700  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 14:12:12.461152  647891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0210 14:12:12.470257  647891 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 14:12:12.470309  647891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 14:12:12.479927  647891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 14:12:12.526468  647891 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 14:12:12.526533  647891 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 14:12:12.646027  647891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 14:12:12.646189  647891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 14:12:12.646291  647891 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 14:12:12.657926  647891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 14:12:12.660818  647891 out.go:235]   - Generating certificates and keys ...
	I0210 14:12:12.660928  647891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 14:12:12.661022  647891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 14:12:12.661164  647891 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 14:12:12.661261  647891 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0210 14:12:12.661358  647891 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0210 14:12:12.661464  647891 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0210 14:12:12.661568  647891 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0210 14:12:12.661650  647891 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0210 14:12:12.661748  647891 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 14:12:12.661862  647891 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 14:12:12.661917  647891 kubeadm.go:310] [certs] Using the existing "sa" key
	I0210 14:12:12.661998  647891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 14:12:12.780092  647891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 14:12:12.997667  647891 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 14:12:13.165032  647891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 14:12:13.297324  647891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 14:12:13.407861  647891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 14:12:13.408365  647891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 14:12:13.411477  647891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 14:12:13.413309  647891 out.go:235]   - Booting up control plane ...
	I0210 14:12:13.413450  647891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 14:12:13.413547  647891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 14:12:13.415050  647891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 14:12:13.433081  647891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 14:12:13.441419  647891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 14:12:13.441482  647891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 14:12:13.567261  647891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 14:12:13.567429  647891 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 14:12:14.080023  647891 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 512.899029ms
	I0210 14:12:14.080151  647891 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 14:12:19.082293  647891 kubeadm.go:310] [api-check] The API server is healthy after 5.00209227s
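	# The same health endpoints kubeadm polls above can be spot-checked by hand; a sketch using
	# the kubelet healthz port from the log and this profile's API server address
	# (192.168.39.38:8444), with -k because the apiserver serves a self-signed certificate.
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-991097 "curl -sS http://127.0.0.1:10248/healthz; echo"
	curl -ks https://192.168.39.38:8444/healthz; echo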
	I0210 14:12:19.097053  647891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 14:12:19.128233  647891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 14:12:19.181291  647891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 14:12:19.181616  647891 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-991097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 14:12:19.194310  647891 kubeadm.go:310] [bootstrap-token] Using token: mnjk32.fgjackbr8f6xpsoe
	I0210 14:12:19.195599  647891 out.go:235]   - Configuring RBAC rules ...
	I0210 14:12:19.195756  647891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 14:12:19.207224  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 14:12:19.218283  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 14:12:19.223717  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 14:12:19.236200  647891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 14:12:19.244351  647891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 14:12:19.488623  647891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 14:12:19.926025  647891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 14:12:20.490610  647891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 14:12:20.490635  647891 kubeadm.go:310] 
	I0210 14:12:20.490702  647891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 14:12:20.490708  647891 kubeadm.go:310] 
	I0210 14:12:20.490797  647891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 14:12:20.490805  647891 kubeadm.go:310] 
	I0210 14:12:20.490826  647891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 14:12:20.490883  647891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 14:12:20.490951  647891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 14:12:20.490959  647891 kubeadm.go:310] 
	I0210 14:12:20.491041  647891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 14:12:20.491053  647891 kubeadm.go:310] 
	I0210 14:12:20.491096  647891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 14:12:20.491108  647891 kubeadm.go:310] 
	I0210 14:12:20.491216  647891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 14:12:20.491344  647891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 14:12:20.491441  647891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 14:12:20.491451  647891 kubeadm.go:310] 
	I0210 14:12:20.491568  647891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 14:12:20.491678  647891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 14:12:20.491690  647891 kubeadm.go:310] 
	I0210 14:12:20.491762  647891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token mnjk32.fgjackbr8f6xpsoe \
	I0210 14:12:20.491847  647891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cda6234c21caed8b2c457fd9fd9a427fa0fd7aae97fbc146e2dc2d4939983fe9 \
	I0210 14:12:20.491879  647891 kubeadm.go:310] 	--control-plane 
	I0210 14:12:20.491889  647891 kubeadm.go:310] 
	I0210 14:12:20.491958  647891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 14:12:20.491968  647891 kubeadm.go:310] 
	I0210 14:12:20.492034  647891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token mnjk32.fgjackbr8f6xpsoe \
	I0210 14:12:20.492133  647891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cda6234c21caed8b2c457fd9fd9a427fa0fd7aae97fbc146e2dc2d4939983fe9 
	I0210 14:12:20.493401  647891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 14:12:20.493482  647891 cni.go:84] Creating CNI manager for ""
	I0210 14:12:20.493514  647891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 14:12:20.495183  647891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0210 14:12:20.496353  647891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0210 14:12:20.509131  647891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0210 14:12:20.529282  647891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 14:12:20.529370  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:20.529403  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-991097 minikube.k8s.io/updated_at=2025_02_10T14_12_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04 minikube.k8s.io/name=default-k8s-diff-port-991097 minikube.k8s.io/primary=true
	I0210 14:12:20.544926  647891 ops.go:34] apiserver oom_adj: -16
	I0210 14:12:20.760939  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:21.261178  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:21.761401  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:22.262028  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:22.761178  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:23.261587  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:23.761361  647891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 14:12:23.872332  647891 kubeadm.go:1113] duration metric: took 3.343041771s to wait for elevateKubeSystemPrivileges
	I0210 14:12:23.872373  647891 kubeadm.go:394] duration metric: took 5m40.740283252s to StartCluster
	I0210 14:12:23.872399  647891 settings.go:142] acquiring lock: {Name:mk7daa7e5390489a50205707c4b69542e21eb74b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:12:23.872537  647891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 14:12:23.873372  647891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-580861/kubeconfig: {Name:mk6bb5290824b25ea1cddb838f7c832a7edd76ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 14:12:23.873648  647891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0210 14:12:23.873753  647891 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 14:12:23.873853  647891 config.go:182] Loaded profile config "default-k8s-diff-port-991097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 14:12:23.873887  647891 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873905  647891 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873913  647891 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873922  647891 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-991097"
	I0210 14:12:23.873927  647891 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.873938  647891 addons.go:247] addon dashboard should already be in state true
	I0210 14:12:23.873939  647891 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.873952  647891 addons.go:247] addon metrics-server should already be in state true
	I0210 14:12:23.873952  647891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-991097"
	I0210 14:12:23.873979  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.873988  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.873912  647891 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.874043  647891 addons.go:247] addon storage-provisioner should already be in state true
	I0210 14:12:23.874086  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.874363  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874413  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874364  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874366  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874488  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.874496  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874547  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874552  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.874949  647891 out.go:177] * Verifying Kubernetes components...
	I0210 14:12:23.876130  647891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 14:12:23.890806  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0210 14:12:23.890815  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I0210 14:12:23.891456  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.891467  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.891547  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0210 14:12:23.892086  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.892113  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.892140  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.892234  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.892260  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.892620  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.892677  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.892756  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.892784  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.893133  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.893208  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.893259  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.893279  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.893318  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.893670  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.893725  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.895428  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0210 14:12:23.895965  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.896617  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.896644  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.897105  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.897324  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.900256  647891 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-991097"
	W0210 14:12:23.900294  647891 addons.go:247] addon default-storageclass should already be in state true
	I0210 14:12:23.900324  647891 host.go:66] Checking if "default-k8s-diff-port-991097" exists ...
	I0210 14:12:23.900640  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.900680  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.912795  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0210 14:12:23.913054  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0210 14:12:23.913323  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.913658  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.913858  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.913884  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.914232  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.914252  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.914320  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.914363  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0210 14:12:23.914498  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.914699  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.914829  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.915120  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.915140  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.915307  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.915673  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.915884  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.916520  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.916649  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0210 14:12:23.917160  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.917660  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.917859  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.917986  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.918005  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.918315  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.918450  647891 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 14:12:23.918704  647891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 14:12:23.918848  647891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 14:12:23.919214  647891 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 14:12:23.919222  647891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 14:12:23.920579  647891 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 14:12:23.920587  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 14:12:23.920608  647891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 14:12:23.920624  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.920692  647891 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 14:12:23.920705  647891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 14:12:23.920723  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.921692  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 14:12:23.921732  647891 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 14:12:23.921750  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.924480  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.925009  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.925039  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.925163  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.925210  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.925419  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.925609  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.926027  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.926047  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:23.926098  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.926117  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.926143  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.926326  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.926439  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.926472  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.926485  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.926693  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.926752  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:23.927013  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.927142  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.927264  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:23.936911  647891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0210 14:12:23.937355  647891 main.go:141] libmachine: () Calling .GetVersion
	I0210 14:12:23.937894  647891 main.go:141] libmachine: Using API Version  1
	I0210 14:12:23.937916  647891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 14:12:23.938211  647891 main.go:141] libmachine: () Calling .GetMachineName
	I0210 14:12:23.938418  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetState
	I0210 14:12:23.939948  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .DriverName
	I0210 14:12:23.940165  647891 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 14:12:23.940182  647891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 14:12:23.940201  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHHostname
	I0210 14:12:23.943037  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.943450  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:07:a8", ip: ""} in network mk-default-k8s-diff-port-991097: {Iface:virbr4 ExpiryTime:2025-02-10 15:06:29 +0000 UTC Type:0 Mac:52:54:00:41:07:a8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:default-k8s-diff-port-991097 Clientid:01:52:54:00:41:07:a8}
	I0210 14:12:23.943481  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | domain default-k8s-diff-port-991097 has defined IP address 192.168.39.38 and MAC address 52:54:00:41:07:a8 in network mk-default-k8s-diff-port-991097
	I0210 14:12:23.943582  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHPort
	I0210 14:12:23.943751  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHKeyPath
	I0210 14:12:23.943873  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .GetSSHUsername
	I0210 14:12:23.944008  647891 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/default-k8s-diff-port-991097/id_rsa Username:docker}
	I0210 14:12:24.052454  647891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 14:12:24.072649  647891 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-991097" to be "Ready" ...
	I0210 14:12:24.096932  647891 node_ready.go:49] node "default-k8s-diff-port-991097" has status "Ready":"True"
	I0210 14:12:24.096960  647891 node_ready.go:38] duration metric: took 24.264753ms for node "default-k8s-diff-port-991097" to be "Ready" ...
	I0210 14:12:24.096970  647891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:12:24.099847  647891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:24.138048  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 14:12:24.138085  647891 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 14:12:24.138256  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 14:12:24.141886  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 14:12:24.166242  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 14:12:24.166277  647891 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 14:12:24.206775  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 14:12:24.206801  647891 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 14:12:24.226641  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 14:12:24.226667  647891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 14:12:24.245183  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 14:12:24.245208  647891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0210 14:12:24.278451  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 14:12:24.278493  647891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 14:12:24.306222  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 14:12:24.306256  647891 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0210 14:12:24.342596  647891 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 14:12:24.342630  647891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 14:12:24.399654  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:24.399689  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:24.400136  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:24.400160  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:24.400175  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:24.400184  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:24.400477  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:24.400499  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:24.400506  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:24.418656  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:24.418678  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:24.418964  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:24.418997  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:24.419003  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:24.435724  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 14:12:24.435747  647891 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0210 14:12:24.449333  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 14:12:24.528025  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 14:12:24.528058  647891 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 14:12:24.616254  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 14:12:24.616294  647891 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 14:12:24.710068  647891 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 14:12:24.710108  647891 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 14:12:24.828857  647891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 14:12:25.153728  647891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.011798056s)
	I0210 14:12:25.153806  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.153823  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.154144  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.154171  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.154187  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.154203  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.154213  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.154482  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.154482  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.154508  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.587013  647891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.137616057s)
	I0210 14:12:25.587092  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.587120  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.587435  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.587489  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.587532  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.587586  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:25.587599  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:25.587870  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:25.587920  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:25.587928  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:25.587937  647891 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-991097"
	I0210 14:12:26.117299  647891 pod_ready.go:103] pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:12:27.032443  647891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.203528578s)
	I0210 14:12:27.032495  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:27.032511  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:27.032867  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:27.032888  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:27.032896  647891 main.go:141] libmachine: Making call to close driver server
	I0210 14:12:27.032901  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) Calling .Close
	I0210 14:12:27.033216  647891 main.go:141] libmachine: Successfully made call to close driver server
	I0210 14:12:27.033248  647891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0210 14:12:27.033245  647891 main.go:141] libmachine: (default-k8s-diff-port-991097) DBG | Closing plugin on server side
	I0210 14:12:27.035191  647891 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-991097 addons enable metrics-server
	
	I0210 14:12:27.036488  647891 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0210 14:12:27.037799  647891 addons.go:514] duration metric: took 3.16405216s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0210 14:12:28.105934  647891 pod_ready.go:93] pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:28.105960  647891 pod_ready.go:82] duration metric: took 4.006089526s for pod "etcd-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.105971  647891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.111533  647891 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:28.111558  647891 pod_ready.go:82] duration metric: took 5.581237ms for pod "kube-apiserver-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.111568  647891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.116636  647891 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:28.116668  647891 pod_ready.go:82] duration metric: took 5.091992ms for pod "kube-controller-manager-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:28.116681  647891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:30.123433  647891 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"False"
	I0210 14:12:31.624379  647891 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace has status "Ready":"True"
	I0210 14:12:31.624406  647891 pod_ready.go:82] duration metric: took 3.507715801s for pod "kube-scheduler-default-k8s-diff-port-991097" in "kube-system" namespace to be "Ready" ...
	I0210 14:12:31.624414  647891 pod_ready.go:39] duration metric: took 7.527433406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 14:12:31.624430  647891 api_server.go:52] waiting for apiserver process to appear ...
	I0210 14:12:31.624479  647891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 14:12:31.647536  647891 api_server.go:72] duration metric: took 7.773850883s to wait for apiserver process to appear ...
	I0210 14:12:31.647560  647891 api_server.go:88] waiting for apiserver healthz status ...
	I0210 14:12:31.647580  647891 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8444/healthz ...
	I0210 14:12:31.652686  647891 api_server.go:279] https://192.168.39.38:8444/healthz returned 200:
	ok
	I0210 14:12:31.653569  647891 api_server.go:141] control plane version: v1.32.1
	I0210 14:12:31.653594  647891 api_server.go:131] duration metric: took 6.025911ms to wait for apiserver health ...
	I0210 14:12:31.653604  647891 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 14:12:31.656891  647891 system_pods.go:59] 9 kube-system pods found
	I0210 14:12:31.656923  647891 system_pods.go:61] "coredns-668d6bf9bc-28wch" [927d1cd9-ae9d-4278-84d5-5bd3239cd786] Running
	I0210 14:12:31.656928  647891 system_pods.go:61] "coredns-668d6bf9bc-nmbcp" [2c1a705f-ab6a-41ef-a4d9-50e3ca250ed9] Running
	I0210 14:12:31.656931  647891 system_pods.go:61] "etcd-default-k8s-diff-port-991097" [b0b539ce-5f91-40a5-8d70-0a75dfe2ed6a] Running
	I0210 14:12:31.656935  647891 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-991097" [671ba619-e5e2-4907-a13d-2c67be54a92e] Running
	I0210 14:12:31.656938  647891 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-991097" [8c8cb8cc-e70f-4f8f-8f9b-05c43759c492] Running
	I0210 14:12:31.656941  647891 system_pods.go:61] "kube-proxy-q4hfw" [4be41fa0-22f6-412b-87ef-c7348699fc31] Running
	I0210 14:12:31.656947  647891 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-991097" [ea3d0af7-156b-444d-967e-67226742cbe7] Running
	I0210 14:12:31.656957  647891 system_pods.go:61] "metrics-server-f79f97bbb-88dls" [61895ed1-ecb5-4d33-94bd-1c8c73f7ed51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 14:12:31.656967  647891 system_pods.go:61] "storage-provisioner" [5684753f-8a90-4d05-9562-5dd0d567de4a] Running
	I0210 14:12:31.656976  647891 system_pods.go:74] duration metric: took 3.364979ms to wait for pod list to return data ...
	I0210 14:12:31.656984  647891 default_sa.go:34] waiting for default service account to be created ...
	I0210 14:12:31.665386  647891 default_sa.go:45] found service account: "default"
	I0210 14:12:31.665409  647891 default_sa.go:55] duration metric: took 8.414491ms for default service account to be created ...
	I0210 14:12:31.665416  647891 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 14:12:31.668407  647891 system_pods.go:86] 9 kube-system pods found
	I0210 14:12:31.668429  647891 system_pods.go:89] "coredns-668d6bf9bc-28wch" [927d1cd9-ae9d-4278-84d5-5bd3239cd786] Running
	I0210 14:12:31.668435  647891 system_pods.go:89] "coredns-668d6bf9bc-nmbcp" [2c1a705f-ab6a-41ef-a4d9-50e3ca250ed9] Running
	I0210 14:12:31.668439  647891 system_pods.go:89] "etcd-default-k8s-diff-port-991097" [b0b539ce-5f91-40a5-8d70-0a75dfe2ed6a] Running
	I0210 14:12:31.668443  647891 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-991097" [671ba619-e5e2-4907-a13d-2c67be54a92e] Running
	I0210 14:12:31.668447  647891 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-991097" [8c8cb8cc-e70f-4f8f-8f9b-05c43759c492] Running
	I0210 14:12:31.668450  647891 system_pods.go:89] "kube-proxy-q4hfw" [4be41fa0-22f6-412b-87ef-c7348699fc31] Running
	I0210 14:12:31.668453  647891 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-991097" [ea3d0af7-156b-444d-967e-67226742cbe7] Running
	I0210 14:12:31.668459  647891 system_pods.go:89] "metrics-server-f79f97bbb-88dls" [61895ed1-ecb5-4d33-94bd-1c8c73f7ed51] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 14:12:31.668463  647891 system_pods.go:89] "storage-provisioner" [5684753f-8a90-4d05-9562-5dd0d567de4a] Running
	I0210 14:12:31.668472  647891 system_pods.go:126] duration metric: took 3.049778ms to wait for k8s-apps to be running ...
	I0210 14:12:31.668480  647891 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 14:12:31.668529  647891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 14:12:31.715692  647891 system_svc.go:56] duration metric: took 47.199919ms WaitForService to wait for kubelet
	I0210 14:12:31.715721  647891 kubeadm.go:582] duration metric: took 7.842039698s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 14:12:31.715745  647891 node_conditions.go:102] verifying NodePressure condition ...
	I0210 14:12:31.718389  647891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 14:12:31.718422  647891 node_conditions.go:123] node cpu capacity is 2
	I0210 14:12:31.718442  647891 node_conditions.go:105] duration metric: took 2.692752ms to run NodePressure ...
	I0210 14:12:31.718453  647891 start.go:241] waiting for startup goroutines ...
	I0210 14:12:31.718463  647891 start.go:246] waiting for cluster config update ...
	I0210 14:12:31.718473  647891 start.go:255] writing updated cluster config ...
	I0210 14:12:31.718739  647891 ssh_runner.go:195] Run: rm -f paused
	I0210 14:12:31.773099  647891 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 14:12:31.774883  647891 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-991097" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.447459595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197469447438639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8e4b466-c0df-4f0e-9a5d-04de353c8657 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.447948393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c99ffef7-4b42-4dba-b5e8-277e2394ce2d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.448003111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c99ffef7-4b42-4dba-b5e8-277e2394ce2d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.448040309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c99ffef7-4b42-4dba-b5e8-277e2394ce2d name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.481105829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afa4dab6-6ff3-4ef9-bc55-d2f169930f32 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.481194918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afa4dab6-6ff3-4ef9-bc55-d2f169930f32 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.482325509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87832a9d-5005-4d95-9322-63aec1eb37fd name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.482783966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197469482756585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87832a9d-5005-4d95-9322-63aec1eb37fd name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.483330247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bce4883-8e39-43eb-a264-5b00bdef4105 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.483402821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bce4883-8e39-43eb-a264-5b00bdef4105 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.483443078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2bce4883-8e39-43eb-a264-5b00bdef4105 name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.514803716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bf9ebd3-173b-4650-872d-ca81b393b577 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.514874572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bf9ebd3-173b-4650-872d-ca81b393b577 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.515861063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33deb14a-266b-40e7-aac8-5b532f88ce49 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.516236904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197469516218454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33deb14a-266b-40e7-aac8-5b532f88ce49 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.516714544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2ed6cca-a088-4462-b207-622cdab0b4bc name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.516762411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2ed6cca-a088-4462-b207-622cdab0b4bc name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.516797835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d2ed6cca-a088-4462-b207-622cdab0b4bc name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.552118447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63360dff-c12e-44f1-bc8c-a799d8f6d3e2 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.552207371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63360dff-c12e-44f1-bc8c-a799d8f6d3e2 name=/runtime.v1.RuntimeService/Version
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.553393471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4dd77ba-26c4-446a-8dd7-492a1781b634 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.553820103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739197469553789016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4dd77ba-26c4-446a-8dd7-492a1781b634 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.554323226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ab73613-3ed2-4974-891a-c40caa0734cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.554380138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ab73613-3ed2-4974-891a-c40caa0734cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 10 14:24:29 old-k8s-version-643105 crio[627]: time="2025-02-10 14:24:29.554417997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6ab73613-3ed2-4974-891a-c40caa0734cc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb10 14:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053008] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041885] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.089243] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.827394] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.420700] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb10 14:01] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.115834] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.165556] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.132361] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.253937] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.572683] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.063864] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.037525] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +14.230154] kauditd_printk_skb: 46 callbacks suppressed
	[Feb10 14:05] systemd-fstab-generator[4994]: Ignoring "noauto" option for root device
	[Feb10 14:07] systemd-fstab-generator[5274]: Ignoring "noauto" option for root device
	[  +0.063648] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:24:29 up 23 min,  0 users,  load average: 0.08, 0.05, 0.04
	Linux old-k8s-version-643105 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000b7d020, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000a57d70, 0x24, 0x0, ...)
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]: net.(*Dialer).DialContext(0xc000426720, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000a57d70, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000998a80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000a57d70, 0x24, 0x60, 0x7fcbec55ffe8, 0x118, ...)
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]: net/http.(*Transport).dial(0xc0001d77c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000a57d70, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]: net/http.(*Transport).dialConn(0xc0001d77c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc0003a8600, 0x5, 0xc000a57d70, 0x24, 0x0, 0xc0006a18c0, ...)
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]: net/http.(*Transport).dialConnFor(0xc0001d77c0, 0xc000a4b550)
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]: created by net/http.(*Transport).queueForDial
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7144]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 10 14:24:28 old-k8s-version-643105 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 10 14:24:28 old-k8s-version-643105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 10 14:24:28 old-k8s-version-643105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 180.
	Feb 10 14:24:28 old-k8s-version-643105 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 10 14:24:28 old-k8s-version-643105 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7162]: I0210 14:24:28.871583    7162 server.go:416] Version: v1.20.0
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7162]: I0210 14:24:28.871931    7162 server.go:837] Client rotation is on, will bootstrap in background
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7162]: I0210 14:24:28.874862    7162 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7162]: I0210 14:24:28.876989    7162 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 10 14:24:28 old-k8s-version-643105 kubelet[7162]: W0210 14:24:28.877024    7162 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 2 (229.312198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-643105" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (378.73s)

                                                
                                    

Test pass (270/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 26.37
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 13.93
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
22 TestOffline 85.51
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 141.33
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 9.56
35 TestAddons/parallel/Registry 19.93
37 TestAddons/parallel/InspektorGadget 11.85
38 TestAddons/parallel/MetricsServer 6.86
40 TestAddons/parallel/CSI 60.57
41 TestAddons/parallel/Headlamp 19.17
42 TestAddons/parallel/CloudSpanner 5.58
43 TestAddons/parallel/LocalPath 57.09
44 TestAddons/parallel/NvidiaDevicePlugin 6.55
45 TestAddons/parallel/Yakd 11.85
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 85.3
49 TestCertExpiration 322.45
51 TestForceSystemdFlag 110.32
52 TestForceSystemdEnv 44.31
54 TestKVMDriverInstallOrUpdate 4.78
58 TestErrorSpam/setup 42.34
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.8
61 TestErrorSpam/pause 1.65
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 5.32
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 58.74
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 48.76
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.41
75 TestFunctional/serial/CacheCmd/cache/add_local 2.19
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 40.93
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.52
87 TestFunctional/serial/InvalidService 4.08
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 19.45
91 TestFunctional/parallel/DryRun 0.34
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.2
97 TestFunctional/parallel/ServiceCmdConnect 7.52
98 TestFunctional/parallel/AddonsCmd 0.12
99 TestFunctional/parallel/PersistentVolumeClaim 44.73
101 TestFunctional/parallel/SSHCmd 0.39
102 TestFunctional/parallel/CpCmd 1.46
103 TestFunctional/parallel/MySQL 28.85
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.81
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
113 TestFunctional/parallel/License 1.08
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
116 TestFunctional/parallel/MountCmd/any-port 10.84
117 TestFunctional/parallel/ProfileCmd/profile_list 0.46
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
119 TestFunctional/parallel/ServiceCmd/List 0.52
120 TestFunctional/parallel/MountCmd/specific-port 1.55
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
123 TestFunctional/parallel/ServiceCmd/Format 0.28
124 TestFunctional/parallel/ServiceCmd/URL 0.34
125 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.7
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
142 TestFunctional/parallel/ImageCommands/Setup 2.28
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.37
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.23
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.46
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 205.43
161 TestMultiControlPlane/serial/DeployApp 7.17
162 TestMultiControlPlane/serial/PingHostFromPods 1.18
163 TestMultiControlPlane/serial/AddWorkerNode 58.62
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
166 TestMultiControlPlane/serial/CopyFile 13.31
167 TestMultiControlPlane/serial/StopSecondaryNode 91.68
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
169 TestMultiControlPlane/serial/RestartSecondaryNode 58.75
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 443.26
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.7
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
174 TestMultiControlPlane/serial/StopCluster 272.77
175 TestMultiControlPlane/serial/RestartCluster 103.01
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
177 TestMultiControlPlane/serial/AddSecondaryNode 81.55
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
182 TestJSONOutput/start/Command 50.2
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.76
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.67
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.35
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 97.88
214 TestMountStart/serial/StartWithMountFirst 28.73
215 TestMountStart/serial/VerifyMountFirst 0.39
216 TestMountStart/serial/StartWithMountSecond 29.64
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.91
219 TestMountStart/serial/VerifyMountPostDelete 0.39
220 TestMountStart/serial/Stop 1.31
221 TestMountStart/serial/RestartStopped 24.94
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 119.02
226 TestMultiNode/serial/DeployApp2Nodes 5.94
227 TestMultiNode/serial/PingHostFrom2Pods 0.83
228 TestMultiNode/serial/AddNode 50.96
229 TestMultiNode/serial/MultiNodeLabels 0.07
230 TestMultiNode/serial/ProfileList 0.6
231 TestMultiNode/serial/CopyFile 7.57
232 TestMultiNode/serial/StopNode 2.43
233 TestMultiNode/serial/StartAfterStop 41.22
234 TestMultiNode/serial/RestartKeepsNodes 347.44
235 TestMultiNode/serial/DeleteNode 2.7
236 TestMultiNode/serial/StopMultiNode 181.89
237 TestMultiNode/serial/RestartMultiNode 118.19
238 TestMultiNode/serial/ValidateNameConflict 47.03
245 TestScheduledStopUnix 115.41
249 TestRunningBinaryUpgrade 171.17
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 94.52
263 TestNetworkPlugins/group/false 3.15
267 TestNoKubernetes/serial/StartWithStopK8s 68.14
268 TestNoKubernetes/serial/Start 48.2
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
270 TestNoKubernetes/serial/ProfileList 1.65
271 TestNoKubernetes/serial/Stop 1.35
272 TestNoKubernetes/serial/StartNoArgs 45.52
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
274 TestStoppedBinaryUpgrade/Setup 3.42
275 TestStoppedBinaryUpgrade/Upgrade 103.56
284 TestPause/serial/Start 59.65
286 TestNetworkPlugins/group/auto/Start 59.73
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
288 TestNetworkPlugins/group/kindnet/Start 92.81
289 TestNetworkPlugins/group/auto/KubeletFlags 0.41
290 TestNetworkPlugins/group/auto/NetCatPod 10.42
291 TestNetworkPlugins/group/auto/DNS 0.17
292 TestNetworkPlugins/group/auto/Localhost 0.14
293 TestNetworkPlugins/group/auto/HairPin 0.14
294 TestNetworkPlugins/group/calico/Start 94.43
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
297 TestNetworkPlugins/group/kindnet/NetCatPod 12.24
298 TestNetworkPlugins/group/kindnet/DNS 0.19
299 TestNetworkPlugins/group/kindnet/Localhost 0.2
300 TestNetworkPlugins/group/kindnet/HairPin 0.17
301 TestNetworkPlugins/group/custom-flannel/Start 81.42
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/calico/KubeletFlags 0.24
304 TestNetworkPlugins/group/calico/NetCatPod 13.3
305 TestNetworkPlugins/group/enable-default-cni/Start 66.59
306 TestNetworkPlugins/group/calico/DNS 0.16
307 TestNetworkPlugins/group/calico/Localhost 0.14
308 TestNetworkPlugins/group/calico/HairPin 0.14
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
311 TestNetworkPlugins/group/flannel/Start 74.84
312 TestNetworkPlugins/group/custom-flannel/DNS 0.18
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
315 TestNetworkPlugins/group/bridge/Start 62.61
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
325 TestNetworkPlugins/group/flannel/NetCatPod 11.28
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
327 TestNetworkPlugins/group/bridge/NetCatPod 10.24
328 TestNetworkPlugins/group/flannel/DNS 0.16
329 TestNetworkPlugins/group/flannel/Localhost 0.12
330 TestNetworkPlugins/group/flannel/HairPin 0.13
331 TestNetworkPlugins/group/bridge/DNS 0.17
332 TestNetworkPlugins/group/bridge/Localhost 0.14
333 TestNetworkPlugins/group/bridge/HairPin 0.14
335 TestStartStop/group/no-preload/serial/FirstStart 75.6
337 TestStartStop/group/embed-certs/serial/FirstStart 76.13
338 TestStartStop/group/no-preload/serial/DeployApp 10.29
339 TestStartStop/group/embed-certs/serial/DeployApp 10.28
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
341 TestStartStop/group/no-preload/serial/Stop 91.01
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
343 TestStartStop/group/embed-certs/serial/Stop 91.47
344 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
345 TestStartStop/group/no-preload/serial/SecondStart 349.25
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
347 TestStartStop/group/embed-certs/serial/SecondStart 319.28
350 TestStartStop/group/old-k8s-version/serial/Stop 2.57
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
357 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
358 TestStartStop/group/embed-certs/serial/Pause 2.97
360 TestStartStop/group/newest-cni/serial/FirstStart 60.04
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
363 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
364 TestStartStop/group/no-preload/serial/Pause 2.98
365 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.05
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
370 TestStartStop/group/newest-cni/serial/Stop 10.61
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/newest-cni/serial/SecondStart 37.85
373 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
376 TestStartStop/group/newest-cni/serial/Pause 2.37
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
378 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 374.92
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.72
x
+
TestDownloadOnly/v1.20.0/json-events (26.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-754359 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-754359 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.367945508s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0210 12:44:36.033172  588140 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0210 12:44:36.033283  588140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-754359
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-754359: exit status 85 (66.239776ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-754359 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC |          |
	|         | -p download-only-754359        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:44:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:44:09.709904  588152 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:44:09.710048  588152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:44:09.710078  588152 out.go:358] Setting ErrFile to fd 2...
	I0210 12:44:09.710092  588152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:44:09.710291  588152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	W0210 12:44:09.710430  588152 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20390-580861/.minikube/config/config.json: open /home/jenkins/minikube-integration/20390-580861/.minikube/config/config.json: no such file or directory
	I0210 12:44:09.711047  588152 out.go:352] Setting JSON to true
	I0210 12:44:09.712132  588152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8795,"bootTime":1739182655,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:44:09.712248  588152 start.go:139] virtualization: kvm guest
	I0210 12:44:09.714628  588152 out.go:97] [download-only-754359] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:44:09.714776  588152 notify.go:220] Checking for updates...
	W0210 12:44:09.714826  588152 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball: no such file or directory
	I0210 12:44:09.716095  588152 out.go:169] MINIKUBE_LOCATION=20390
	I0210 12:44:09.717350  588152 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:44:09.718689  588152 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 12:44:09.719899  588152 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 12:44:09.721216  588152 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 12:44:09.723414  588152 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 12:44:09.723671  588152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:44:09.755908  588152 out.go:97] Using the kvm2 driver based on user configuration
	I0210 12:44:09.755939  588152 start.go:297] selected driver: kvm2
	I0210 12:44:09.755946  588152 start.go:901] validating driver "kvm2" against <nil>
	I0210 12:44:09.756297  588152 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:44:09.756409  588152 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 12:44:09.772108  588152 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 12:44:09.772164  588152 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:44:09.772755  588152 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0210 12:44:09.772902  588152 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 12:44:09.772933  588152 cni.go:84] Creating CNI manager for ""
	I0210 12:44:09.772982  588152 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:44:09.772994  588152 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 12:44:09.773084  588152 start.go:340] cluster config:
	{Name:download-only-754359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-754359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:44:09.773272  588152 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:44:09.775042  588152 out.go:97] Downloading VM boot image ...
	I0210 12:44:09.775083  588152 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20390-580861/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0210 12:44:20.393133  588152 out.go:97] Starting "download-only-754359" primary control-plane node in "download-only-754359" cluster
	I0210 12:44:20.393167  588152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 12:44:20.501154  588152 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 12:44:20.501199  588152 cache.go:56] Caching tarball of preloaded images
	I0210 12:44:20.501377  588152 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0210 12:44:20.503264  588152 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0210 12:44:20.503288  588152 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0210 12:44:20.612258  588152 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0210 12:44:34.315277  588152 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0210 12:44:34.315365  588152 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-754359 host does not exist
	  To start a cluster, run: "minikube start -p download-only-754359"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
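For reference, the preload that this run fetched can be reproduced by hand from the URL and md5 checksum in the download.go lines above. A minimal sketch, assuming curl and md5sum are available on the host; the file name and checksum are the ones from the log, and the destination directory is up to you:

# Fetch the CRI-O preload tarball for Kubernetes v1.20.0 and verify it against the checksum minikube used
PRELOAD=preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
curl -fLo "$PRELOAD" "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/$PRELOAD"
echo "f93b07cde9c3289306cbaeb7a1803c19  $PRELOAD" | md5sum -c -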

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-754359
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (13.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-573306 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-573306 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.92777987s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (13.93s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0210 12:44:50.300720  588140 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0210 12:44:50.300776  588140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)
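The preload-exists subtest only verifies that the tarball is already sitting in the local cache. The same check can be made by hand by listing the cache directory; a minimal sketch, assuming the default cache location when MINIKUBE_HOME is unset (this job sets MINIKUBE_HOME explicitly, as shown in the environment lines above):

# Cached preload tarballs live under the minikube cache; the v1.32.1 CRI-O preload should be listed here
ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/"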

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-573306
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-573306: exit status 85 (67.320053ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-754359 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC |                     |
	|         | -p download-only-754359        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC | 10 Feb 25 12:44 UTC |
	| delete  | -p download-only-754359        | download-only-754359 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC | 10 Feb 25 12:44 UTC |
	| start   | -o=json --download-only        | download-only-573306 | jenkins | v1.35.0 | 10 Feb 25 12:44 UTC |                     |
	|         | -p download-only-573306        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:44:36
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:44:36.417948  588404 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:44:36.418228  588404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:44:36.418237  588404 out.go:358] Setting ErrFile to fd 2...
	I0210 12:44:36.418241  588404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:44:36.418441  588404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 12:44:36.419059  588404 out.go:352] Setting JSON to true
	I0210 12:44:36.420041  588404 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8821,"bootTime":1739182655,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:44:36.420149  588404 start.go:139] virtualization: kvm guest
	I0210 12:44:36.422175  588404 out.go:97] [download-only-573306] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:44:36.422276  588404 notify.go:220] Checking for updates...
	I0210 12:44:36.423519  588404 out.go:169] MINIKUBE_LOCATION=20390
	I0210 12:44:36.424745  588404 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:44:36.425918  588404 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 12:44:36.427052  588404 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 12:44:36.428118  588404 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 12:44:36.430217  588404 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 12:44:36.430405  588404 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:44:36.463074  588404 out.go:97] Using the kvm2 driver based on user configuration
	I0210 12:44:36.463104  588404 start.go:297] selected driver: kvm2
	I0210 12:44:36.463112  588404 start.go:901] validating driver "kvm2" against <nil>
	I0210 12:44:36.463557  588404 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:44:36.463658  588404 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20390-580861/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0210 12:44:36.478805  588404 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0210 12:44:36.478869  588404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:44:36.479426  588404 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0210 12:44:36.479566  588404 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 12:44:36.479594  588404 cni.go:84] Creating CNI manager for ""
	I0210 12:44:36.479641  588404 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0210 12:44:36.479649  588404 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0210 12:44:36.479697  588404 start.go:340] cluster config:
	{Name:download-only-573306 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-573306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:44:36.479808  588404 iso.go:125] acquiring lock: {Name:mk23287370815f068f22272b7c777d3dcd1ee0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:44:36.481218  588404 out.go:97] Starting "download-only-573306" primary control-plane node in "download-only-573306" cluster
	I0210 12:44:36.481235  588404 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 12:44:37.044187  588404 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0210 12:44:37.044222  588404 cache.go:56] Caching tarball of preloaded images
	I0210 12:44:37.044426  588404 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0210 12:44:37.045907  588404 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0210 12:44:37.045923  588404 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0210 12:44:37.151770  588404 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20390-580861/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-573306 host does not exist
	  To start a cluster, run: "minikube start -p download-only-573306"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-573306
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0210 12:44:50.903727  588140 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-261143 --alsologtostderr --binary-mirror http://127.0.0.1:41717 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-261143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-261143
--- PASS: TestBinaryMirror (0.62s)
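TestBinaryMirror starts a throwaway profile with --binary-mirror pointed at a local HTTP endpoint, so the Kubernetes binaries (the kubectl download from dl.k8s.io is visible in the binary.go line above) are fetched from the mirror instead. A minimal sketch of the same idea; the profile name, port, and mirror directory here are illustrative, and the directory is assumed to mirror the upstream release path layout:

# Serve a local mirror, then run a download-only start against it
python3 -m http.server 41717 --directory ./k8s-mirror &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:41717 --driver=kvm2 --container-runtime=crio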

                                                
                                    
TestOffline (85.51s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-947434 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-947434 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.484507354s)
helpers_test.go:175: Cleaning up "offline-crio-947434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-947434
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-947434: (1.025299622s)
--- PASS: TestOffline (85.51s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-692802
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-692802: exit status 85 (55.42136ms)

                                                
                                                
-- stdout --
	* Profile "addons-692802" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-692802"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-692802
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-692802: exit status 85 (54.35001ms)

                                                
                                                
-- stdout --
	* Profile "addons-692802" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-692802"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (141.33s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-692802 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-692802 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m21.326863074s)
--- PASS: TestAddons/Setup (141.33s)
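The setup run enables every addon under test in a single start invocation. Addons can also be toggled individually on the running profile afterwards, which is what the parallel tests below do when they clean up; a minimal sketch using one addon from the list:

# Enable, then disable, a single addon on the already-running profile
out/minikube-linux-amd64 -p addons-692802 addons enable metrics-server
out/minikube-linux-amd64 -p addons-692802 addons disable metrics-server --alsologtostderr -v=1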

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-692802 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-692802 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-692802 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-692802 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a183f426-b329-49f1-9759-014bcd2a9b34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a183f426-b329-49f1-9759-014bcd2a9b34] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00462032s
addons_test.go:633: (dbg) Run:  kubectl --context addons-692802 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-692802 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-692802 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                    
TestAddons/parallel/Registry (19.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.724376ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-bf2d9" [aa3f3518-d768-442e-8f70-86cabb491756] Running
I0210 12:47:31.375594  588140 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 12:47:31.375621  588140 kapi.go:107] duration metric: took 15.976876ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003735851s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mkjph" [aa21886d-f389-45d4-ac25-ac8cb798cf7d] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003531183s
addons_test.go:331: (dbg) Run:  kubectl --context addons-692802 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-692802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-692802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.04726024s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.93s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.85s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9lz6k" [e4247f43-fbfb-4cac-b91f-04339a8d01ac] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002955593s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable inspektor-gadget --alsologtostderr -v=1: (5.847471138s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.635783ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-kjjv7" [66f5e532-302a-4f4a-b0b4-875231c972a3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00400694s
addons_test.go:402: (dbg) Run:  kubectl --context addons-692802 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)

                                                
                                    
TestAddons/parallel/CSI (60.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 15.98987ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-692802 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-692802 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5760a4df-f71d-474b-b631-5c022d8dcea3] Pending
helpers_test.go:344: "task-pv-pod" [5760a4df-f71d-474b-b631-5c022d8dcea3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5760a4df-f71d-474b-b631-5c022d8dcea3] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.006476646s
addons_test.go:511: (dbg) Run:  kubectl --context addons-692802 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-692802 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-692802 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-692802 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-692802 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-692802 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-692802 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f3f4d475-4a8c-4d74-94e3-2efd26a29e66] Pending
helpers_test.go:344: "task-pv-pod-restore" [f3f4d475-4a8c-4d74-94e3-2efd26a29e66] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f3f4d475-4a8c-4d74-94e3-2efd26a29e66] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004199134s
addons_test.go:553: (dbg) Run:  kubectl --context addons-692802 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-692802 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-692802 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable volumesnapshots --alsologtostderr -v=1: (1.033301358s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.865463654s)
--- PASS: TestAddons/parallel/CSI (60.57s)
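The CSI test walks through PVC -> pod -> VolumeSnapshot -> restored PVC -> restored pod against the csi-hostpath driver. A condensed sketch of the claim-plus-snapshot step, assuming the class names the csi-hostpath-driver addon commonly installs (csi-hostpath-sc and csi-hostpath-snapclass); the testdata manifests may use different names, so check "kubectl get storageclass,volumesnapshotclass" first:

# Create a PVC on the csi-hostpath storage class, then snapshot it
kubectl --context addons-692802 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: hpvc-demo-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc-demo
EOF
# The snapshot is usable once readyToUse flips to true
kubectl --context addons-692802 get volumesnapshot hpvc-demo-snapshot -o jsonpath='{.status.readyToUse}'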

                                                
                                    
TestAddons/parallel/Headlamp (19.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-692802 --alsologtostderr -v=1
I0210 12:47:31.359660  588140 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-8rtmk" [96585fbb-6712-4743-b535-8546cc27353d] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-8rtmk" [96585fbb-6712-4743-b535-8546cc27353d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-8rtmk" [96585fbb-6712-4743-b535-8546cc27353d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005332876s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable headlamp --alsologtostderr -v=1: (6.234941099s)
--- PASS: TestAddons/parallel/Headlamp (19.17s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-r9666" [a76ea288-a542-4746-b45b-40d841aeb691] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005228549s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (57.09s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-692802 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-692802 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ad267d5d-6e94-4920-850d-998970375bfd] Pending
helpers_test.go:344: "test-local-path" [ad267d5d-6e94-4920-850d-998970375bfd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ad267d5d-6e94-4920-850d-998970375bfd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004666474s
addons_test.go:906: (dbg) Run:  kubectl --context addons-692802 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 ssh "cat /opt/local-path-provisioner/pvc-e0786582-1bf5-4756-b266-564a46774f86_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-692802 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-692802 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.278899374s)
--- PASS: TestAddons/parallel/LocalPath (57.09s)
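The final ssh check above reads the test file straight off the node, under /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>. A minimal sketch for locating that directory for a bound claim; the jsonpath query is plain kubectl usage, not something the test runs:

# Map the PVC to its PV name, then look at the provisioner's on-node directory over SSH
kubectl --context addons-692802 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
out/minikube-linux-amd64 -p addons-692802 ssh "ls /opt/local-path-provisioner/"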

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8gl6m" [903ecc9f-03ab-4ced-b872-b46377fa27ab] Running
2025/02/10 12:47:50 [DEBUG] GET http://192.168.39.213:5000
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00436309s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

                                                
                                    
TestAddons/parallel/Yakd (11.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-jpt5s" [bfbe77db-13f7-43a4-909d-ae1157759a3b] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003832142s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-692802 addons disable yakd --alsologtostderr -v=1: (5.847661283s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-692802
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-692802: (1m30.97358313s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-692802
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-692802
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-692802
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (85.3s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-070616 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-070616 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m23.770121123s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-070616 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-070616 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-070616 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-070616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-070616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-070616: (1.01634568s)
--- PASS: TestCertOptions (85.30s)
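The openssl step in this test dumps the API server certificate from inside the VM. While a profile started with extra --apiserver-ips/--apiserver-names flags is still up, the added SANs can be confirmed directly; a minimal sketch (the grep filter is only a convenience, not part of the test):

# Inspect the API server certificate and pull out the Subject Alternative Name block
out/minikube-linux-amd64 -p cert-options-070616 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"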

                                                
                                    
TestCertExpiration (322.45s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-959248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-959248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m39.131827739s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-959248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-959248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.262211348s)
helpers_test.go:175: Cleaning up "cert-expiration-959248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-959248
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-959248: (1.055585089s)
--- PASS: TestCertExpiration (322.45s)
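The expiration test issues short-lived certificates, waits for them to lapse, then restarts with a long expiration so the certificates have to be regenerated. A minimal sketch of the same sequence; the profile name is illustrative and the explicit sleep stands in for the test's internal wait:

# Start with 3-minute certs, let them expire, then restart with a one-year expiration
out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
sleep 200   # wait past the 3-minute expiry
out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 delete -p cert-expiration-demo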

                                                
                                    
TestForceSystemdFlag (110.32s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-545256 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-545256 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m49.120086543s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-545256 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-545256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-545256
--- PASS: TestForceSystemdFlag (110.32s)
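The --force-systemd check reads CRI-O's drop-in configuration over SSH. A minimal sketch of the same inspection, assuming the cgroup manager setting is what you want to confirm in that file (adjust the grep if the drop-in layout differs):

# Confirm CRI-O was configured for the systemd cgroup manager
out/minikube-linux-amd64 -p force-systemd-flag-545256 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager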

                                                
                                    
TestForceSystemdEnv (44.31s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-139209 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-139209 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.279668529s)
helpers_test.go:175: Cleaning up "force-systemd-env-139209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-139209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-139209: (1.026855853s)
--- PASS: TestForceSystemdEnv (44.31s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.78s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0210 13:45:36.427043  588140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 13:45:36.427228  588140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0210 13:45:36.457763  588140 install.go:62] docker-machine-driver-kvm2: exit status 1
W0210 13:45:36.458207  588140 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 13:45:36.458274  588140 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1497292716/001/docker-machine-driver-kvm2
I0210 13:45:36.733391  588140 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1497292716/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000983b18 gz:0xc000983ba0 tar:0xc000983b50 tar.bz2:0xc000983b60 tar.gz:0xc000983b70 tar.xz:0xc000983b80 tar.zst:0xc000983b90 tbz2:0xc000983b60 tgz:0xc000983b70 txz:0xc000983b80 tzst:0xc000983b90 xz:0xc000983ba8 zip:0xc000983bb0 zst:0xc000983bc0] Getters:map[file:0xc000913f40 http:0xc000878ff0 https:0xc000879040] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 13:45:36.733445  588140 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1497292716/001/docker-machine-driver-kvm2
I0210 13:45:39.264028  588140 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 13:45:39.264149  588140 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0210 13:45:39.297760  588140 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0210 13:45:39.297794  588140 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0210 13:45:39.297865  588140 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 13:45:39.297898  588140 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1497292716/002/docker-machine-driver-kvm2
I0210 13:45:39.355224  588140 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1497292716/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000983b18 gz:0xc000983ba0 tar:0xc000983b50 tar.bz2:0xc000983b60 tar.gz:0xc000983b70 tar.xz:0xc000983b80 tar.zst:0xc000983b90 tbz2:0xc000983b60 tgz:0xc000983b70 txz:0xc000983b80 tzst:0xc000983b90 xz:0xc000983ba8 zip:0xc000983bb0 zst:0xc000983bc0] Getters:map[file:0xc00201e6a0 http:0xc000879720 https:0xc000879770] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 13:45:39.355272  588140 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1497292716/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.78s)

                                                
                                    
TestErrorSpam/setup (42.34s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-385422 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-385422 --driver=kvm2  --container-runtime=crio
E0210 12:52:13.585121  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:13.591545  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:13.602915  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:13.624261  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:13.665657  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:13.747156  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:13.908726  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:14.230477  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:14.872550  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:16.154343  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:18.716841  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:23.838924  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:52:34.080709  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-385422 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-385422 --driver=kvm2  --container-runtime=crio: (42.33823794s)
--- PASS: TestErrorSpam/setup (42.34s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (5.32s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 stop: (2.333909558s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 stop: (1.700629269s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-385422 --log_dir /tmp/nospam-385422 stop: (1.285022889s)
--- PASS: TestErrorSpam/stop (5.32s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20390-580861/.minikube/files/etc/test/nested/copy/588140/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (58.74s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-729385 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0210 12:52:54.562455  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:53:35.523928  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-729385 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.739945567s)
--- PASS: TestFunctional/serial/StartWithProxy (58.74s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (48.76s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0210 12:53:48.567768  588140 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-729385 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-729385 --alsologtostderr -v=8: (48.759382942s)
functional_test.go:680: soft start took 48.760233131s for "functional-729385" cluster.
I0210 12:54:37.327642  588140 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (48.76s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-729385 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 cache add registry.k8s.io/pause:3.1: (1.083406043s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 cache add registry.k8s.io/pause:3.3: (1.185988189s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 cache add registry.k8s.io/pause:latest: (1.144012973s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-729385 /tmp/TestFunctionalserialCacheCmdcacheadd_local428805876/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cache add minikube-local-cache-test:functional-729385
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 cache add minikube-local-cache-test:functional-729385: (1.876357454s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cache delete minikube-local-cache-test:functional-729385
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-729385
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.64074ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 kubectl -- --context functional-729385 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-729385 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-729385 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0210 12:54:57.448431  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-729385 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.927832089s)
functional_test.go:778: restart took 40.927972781s for "functional-729385" cluster.
I0210 12:55:26.323173  588140 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (40.93s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-729385 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 logs: (1.43639644s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 logs --file /tmp/TestFunctionalserialLogsFileCmd2295619810/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 logs --file /tmp/TestFunctionalserialLogsFileCmd2295619810/001/logs.txt: (1.518279928s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.08s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-729385 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-729385
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-729385: exit status 115 (273.155057ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.70:31511 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-729385 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.08s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 config get cpus: exit status 14 (84.147973ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 config get cpus: exit status 14 (56.274043ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-729385 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-729385 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 595328: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.45s)

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-729385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-729385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (174.529593ms)

                                                
                                                
-- stdout --
	* [functional-729385] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:55:35.110092  595130 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:55:35.110201  595130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:55:35.110208  595130 out.go:358] Setting ErrFile to fd 2...
	I0210 12:55:35.110212  595130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:55:35.110431  595130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 12:55:35.110976  595130 out.go:352] Setting JSON to false
	I0210 12:55:35.112162  595130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9480,"bootTime":1739182655,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:55:35.112246  595130 start.go:139] virtualization: kvm guest
	I0210 12:55:35.114433  595130 out.go:177] * [functional-729385] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:55:35.115729  595130 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:55:35.115800  595130 notify.go:220] Checking for updates...
	I0210 12:55:35.117996  595130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:55:35.119141  595130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 12:55:35.120117  595130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 12:55:35.121097  595130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:55:35.122054  595130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:55:35.123572  595130 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:55:35.124141  595130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:55:35.124210  595130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:55:35.142827  595130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0210 12:55:35.143421  595130 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:55:35.144175  595130 main.go:141] libmachine: Using API Version  1
	I0210 12:55:35.144205  595130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:55:35.144754  595130 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:55:35.145064  595130 main.go:141] libmachine: (functional-729385) Calling .DriverName
	I0210 12:55:35.145362  595130 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:55:35.145799  595130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:55:35.145901  595130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:55:35.167687  595130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0210 12:55:35.168161  595130 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:55:35.168714  595130 main.go:141] libmachine: Using API Version  1
	I0210 12:55:35.168744  595130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:55:35.169253  595130 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:55:35.169441  595130 main.go:141] libmachine: (functional-729385) Calling .DriverName
	I0210 12:55:35.209495  595130 out.go:177] * Using the kvm2 driver based on existing profile
	I0210 12:55:35.210640  595130 start.go:297] selected driver: kvm2
	I0210 12:55:35.210661  595130 start.go:901] validating driver "kvm2" against &{Name:functional-729385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-729385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:55:35.210831  595130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:55:35.213124  595130 out.go:201] 
	W0210 12:55:35.214167  595130 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 12:55:35.215287  595130 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-729385 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-729385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-729385 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.149084ms)

                                                
                                                
-- stdout --
	* [functional-729385] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:55:34.939168  595084 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:55:34.939471  595084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:55:34.939483  595084 out.go:358] Setting ErrFile to fd 2...
	I0210 12:55:34.939490  595084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:55:34.939879  595084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 12:55:34.940788  595084 out.go:352] Setting JSON to false
	I0210 12:55:34.941905  595084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9480,"bootTime":1739182655,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:55:34.942059  595084 start.go:139] virtualization: kvm guest
	I0210 12:55:34.943923  595084 out.go:177] * [functional-729385] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0210 12:55:34.945443  595084 notify.go:220] Checking for updates...
	I0210 12:55:34.945524  595084 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:55:34.946954  595084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:55:34.948053  595084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 12:55:34.949443  595084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 12:55:34.950947  595084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:55:34.952267  595084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:55:34.954191  595084 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 12:55:34.954918  595084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:55:34.954995  595084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:55:34.974850  595084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0210 12:55:34.975464  595084 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:55:34.976178  595084 main.go:141] libmachine: Using API Version  1
	I0210 12:55:34.976201  595084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:55:34.976644  595084 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:55:34.976841  595084 main.go:141] libmachine: (functional-729385) Calling .DriverName
	I0210 12:55:34.977092  595084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:55:34.977483  595084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 12:55:34.977518  595084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 12:55:34.993891  595084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0210 12:55:34.994475  595084 main.go:141] libmachine: () Calling .GetVersion
	I0210 12:55:34.995002  595084 main.go:141] libmachine: Using API Version  1
	I0210 12:55:34.995027  595084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 12:55:34.995433  595084 main.go:141] libmachine: () Calling .GetMachineName
	I0210 12:55:34.995611  595084 main.go:141] libmachine: (functional-729385) Calling .DriverName
	I0210 12:55:35.031533  595084 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0210 12:55:35.032849  595084 start.go:297] selected driver: kvm2
	I0210 12:55:35.032875  595084 start.go:901] validating driver "kvm2" against &{Name:functional-729385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-729385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:55:35.033051  595084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:55:35.035489  595084 out.go:201] 
	W0210 12:55:35.036793  595084 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0210 12:55:35.038132  595084 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-729385 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-729385 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-s8nlt" [e20b2f25-3a4e-42fd-a964-a27cdb8527dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-s8nlt" [e20b2f25-3a4e-42fd-a964-a27cdb8527dc] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006132573s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 service hello-node-connect --url
2025/02/10 12:55:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.70:30655
functional_test.go:1692: http://192.168.39.70:30655: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-s8nlt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.70:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.70:30655
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.52s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [557e133f-a866-41b7-8097-0fd380e15169] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003041849s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-729385 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-729385 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-729385 get pvc myclaim -o=json
I0210 12:55:41.175138  588140 retry.go:31] will retry after 1.679485798s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:14e2f6a8-38c9-4fe3-a746-b88b5f86c421 ResourceVersion:725 Generation:0 CreationTimestamp:2025-02-10 12:55:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017a0690 VolumeMode:0xc0017a06a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-729385 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-729385 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f941fc59-4463-4595-b4bb-7165a4aba471] Pending
helpers_test.go:344: "sp-pod" [f941fc59-4463-4595-b4bb-7165a4aba471] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f941fc59-4463-4595-b4bb-7165a4aba471] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003716531s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-729385 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-729385 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-729385 delete -f testdata/storage-provisioner/pod.yaml: (2.873982298s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-729385 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [51e2054b-2a91-4cf2-83cf-3d03e84ff6b5] Pending
helpers_test.go:344: "sp-pod" [51e2054b-2a91-4cf2-83cf-3d03e84ff6b5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [51e2054b-2a91-4cf2-83cf-3d03e84ff6b5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004065088s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-729385 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.73s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)
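Equivalent manual check (sketch; same built binary, referred to as minikube here):

	minikube -p functional-729385 ssh "echo hello"          # run a command non-interactively inside the node
	minikube -p functional-729385 ssh "cat /etc/hostname"   # should print functional-729385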

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh -n functional-729385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cp functional-729385:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2224996928/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh -n functional-729385 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh -n functional-729385 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)
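The cp subcommand copies in both directions; a hand-run sketch of the same steps (same profile, local target path is hypothetical):

	minikube -p functional-729385 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
	minikube -p functional-729385 cp functional-729385:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
	minikube -p functional-729385 ssh -n functional-729385 "sudo cat /home/docker/cp-test.txt"  # verify contents on the node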

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-729385 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-tsgrs" [a27d8ff3-7b76-49f0-8cc0-2d324b1faf0d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-tsgrs" [a27d8ff3-7b76-49f0-8cc0-2d324b1faf0d] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.004183441s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-729385 exec mysql-58ccfd96bb-tsgrs -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-729385 exec mysql-58ccfd96bb-tsgrs -- mysql -ppassword -e "show databases;": exit status 1 (125.816557ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0210 12:56:20.495236  588140 retry.go:31] will retry after 1.155538924s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-729385 exec mysql-58ccfd96bb-tsgrs -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-729385 exec mysql-58ccfd96bb-tsgrs -- mysql -ppassword -e "show databases;": exit status 1 (117.150716ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0210 12:56:21.769011  588140 retry.go:31] will retry after 2.086147782s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-729385 exec mysql-58ccfd96bb-tsgrs -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.85s)
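The two ERROR 2002 exits above only mean mysqld had not finished starting inside the container; the test retries until the socket comes up. A manual equivalent (sketch; the pod name is specific to this run and the loop is illustrative):

	until kubectl --context functional-729385 exec mysql-58ccfd96bb-tsgrs -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # retry until /var/run/mysqld/mysqld.sock exists
	done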

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/588140/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo cat /etc/test/nested/copy/588140/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
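FileSync exercises minikube's file sync feature: files placed under the minikube home's files/ directory are copied into the node at the same path when the node starts. A sketch, assuming the default ~/.minikube layout (the 588140 path component is just this test run's PID):

	mkdir -p ~/.minikube/files/etc/test/nested/copy/588140
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/588140/hosts
	# synced into the node on the next start of the profile
	minikube -p functional-729385 ssh "sudo cat /etc/test/nested/copy/588140/hosts"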

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/588140.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo cat /etc/ssl/certs/588140.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/588140.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo cat /usr/share/ca-certificates/588140.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/5881402.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo cat /etc/ssl/certs/5881402.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/5881402.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo cat /usr/share/ca-certificates/5881402.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)
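CertSync checks that user-supplied certificates land in the node's trust store both under their original name and under an openssl hash name. A quick manual spot check (sketch; file names taken from this run):

	minikube -p functional-729385 ssh "sudo cat /etc/ssl/certs/588140.pem"
	minikube -p functional-729385 ssh "ls -l /etc/ssl/certs/51391683.0"   # hash-named copy of the same certificate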

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-729385 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
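The go-template above just dumps the node's label keys; the same information is available with plain jsonpath (sketch):

	kubectl --context functional-729385 get nodes -o jsonpath='{.items[0].metadata.labels}'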

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh "sudo systemctl is-active docker": exit status 1 (263.042575ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh "sudo systemctl is-active containerd": exit status 1 (226.633769ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
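With crio as the selected runtime, docker and containerd must report inactive; systemctl is-active exits 3 for an inactive unit, which is why the non-zero exits above count as success. Sketch of the same check plus the active runtime (assuming the standard crio unit name):

	minikube -p functional-729385 ssh "sudo systemctl is-active crio"        # expect: active
	minikube -p functional-729385 ssh "sudo systemctl is-active docker"      # expect: inactive (exit status 3)
	minikube -p functional-729385 ssh "sudo systemctl is-active containerd"  # expect: inactive (exit status 3)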

                                                
                                    
x
+
TestFunctional/parallel/License (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2305: (dbg) Done: out/minikube-linux-amd64 license: (1.081482516s)
--- PASS: TestFunctional/parallel/License (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-729385 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-729385 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-tc2bh" [d97aec49-0ccc-4448-990d-9c668f5a6534] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-tc2bh" [d97aec49-0ccc-4448-990d-9c668f5a6534] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004091738s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)
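The deployment/service pair created here is what the later ServiceCmd subtests query; by hand (sketch, commands copied from this run):

	kubectl --context functional-729385 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-729385 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-729385 get pods -l app=hello-node   # wait for Running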

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (10.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdany-port4271792457/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739192133830879555" to /tmp/TestFunctionalparallelMountCmdany-port4271792457/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739192133830879555" to /tmp/TestFunctionalparallelMountCmdany-port4271792457/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739192133830879555" to /tmp/TestFunctionalparallelMountCmdany-port4271792457/001/test-1739192133830879555
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.8086ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:55:34.088080  588140 retry.go:31] will retry after 588.351548ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 10 12:55 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 10 12:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 10 12:55 test-1739192133830879555
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh cat /mount-9p/test-1739192133830879555
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-729385 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6fcb8043-911f-4c01-8e5a-4a4a21a8c029] Pending
helpers_test.go:344: "busybox-mount" [6fcb8043-911f-4c01-8e5a-4a4a21a8c029] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6fcb8043-911f-4c01-8e5a-4a4a21a8c029] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6fcb8043-911f-4c01-8e5a-4a4a21a8c029] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004098677s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-729385 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdany-port4271792457/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.84s)
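MountCmd/any-port starts a 9p mount on an ephemeral port and checks it from both sides. Rough manual equivalent (sketch; /tmp/hostdir is a hypothetical local directory):

	minikube mount -p functional-729385 /tmp/hostdir:/mount-9p &        # keep the mount process running
	minikube -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p"  # confirm a 9p filesystem is mounted
	minikube -p functional-729385 ssh "ls -la /mount-9p"                # host files visible inside the node
	minikube -p functional-729385 ssh "sudo umount -f /mount-9p"        # clean up, then stop the mount process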

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "408.448679ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "54.065162ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "291.719687ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "60.289318ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdspecific-port3371370320/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.396646ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:55:44.913215  588140 retry.go:31] will retry after 300.743899ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdspecific-port3371370320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh "sudo umount -f /mount-9p": exit status 1 (197.377891ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-729385 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdspecific-port3371370320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.55s)
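Same flow as any-port but pinned to a fixed port with --port; the final umount failing with "not mounted" is expected, since stopping the mount daemon had already removed the mount. Sketch:

	minikube mount -p functional-729385 /tmp/hostdir:/mount-9p --port 46464 &
	minikube -p functional-729385 ssh "findmnt -T /mount-9p | grep 9p"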

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 service list -o json
functional_test.go:1511: Took "431.63719ms" to run "out/minikube-linux-amd64 -p functional-729385 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.70:30515
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.70:30515
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
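The HTTPS/Format/URL subtests all resolve the same hello-node NodePort, just in different output shapes; by hand (sketch; the IP and port are the ones this run happened to get):

	minikube -p functional-729385 service hello-node --url                        # http://192.168.39.70:30515
	minikube -p functional-729385 service --namespace=default --https --url hello-node
	minikube -p functional-729385 service hello-node --url --format="{{.IP}}"     # just the node IP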

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2107477243/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2107477243/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2107477243/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T" /mount1: exit status 1 (259.06675ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:55:46.480666  588140 retry.go:31] will retry after 588.334042ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-729385 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2107477243/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2107477243/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-729385 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2107477243/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
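VerifyCleanup checks that one kill switch tears down all concurrent mounts; the "unable to find parent, assuming dead" lines just mean the mount daemons were already gone when the test went to stop them. Sketch (hypothetical local directory):

	minikube mount -p functional-729385 /tmp/hostdir:/mount1 &
	minikube mount -p functional-729385 /tmp/hostdir:/mount2 &
	minikube mount -p functional-729385 /tmp/hostdir:/mount3 &
	minikube mount -p functional-729385 --kill=true   # kills every mount process for the profile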

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-729385 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-729385
localhost/kicbase/echo-server:functional-729385
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-729385 image ls --format short --alsologtostderr:
I0210 12:56:01.600882  597039 out.go:345] Setting OutFile to fd 1 ...
I0210 12:56:01.600999  597039 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:01.601008  597039 out.go:358] Setting ErrFile to fd 2...
I0210 12:56:01.601012  597039 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:01.601243  597039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
I0210 12:56:01.601847  597039 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:01.601963  597039 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:01.602480  597039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:01.602555  597039 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:01.617191  597039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
I0210 12:56:01.617811  597039 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:01.618468  597039 main.go:141] libmachine: Using API Version  1
I0210 12:56:01.618495  597039 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:01.618817  597039 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:01.618999  597039 main.go:141] libmachine: (functional-729385) Calling .GetState
I0210 12:56:01.620903  597039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:01.620952  597039 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:01.636301  597039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
I0210 12:56:01.636769  597039 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:01.637285  597039 main.go:141] libmachine: Using API Version  1
I0210 12:56:01.637309  597039 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:01.637641  597039 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:01.637848  597039 main.go:141] libmachine: (functional-729385) Calling .DriverName
I0210 12:56:01.638060  597039 ssh_runner.go:195] Run: systemctl --version
I0210 12:56:01.638091  597039 main.go:141] libmachine: (functional-729385) Calling .GetSSHHostname
I0210 12:56:01.641485  597039 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:01.641995  597039 main.go:141] libmachine: (functional-729385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:13:08", ip: ""} in network mk-functional-729385: {Iface:virbr1 ExpiryTime:2025-02-10 13:53:05 +0000 UTC Type:0 Mac:52:54:00:ed:13:08 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-729385 Clientid:01:52:54:00:ed:13:08}
I0210 12:56:01.642023  597039 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined IP address 192.168.39.70 and MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:01.642150  597039 main.go:141] libmachine: (functional-729385) Calling .GetSSHPort
I0210 12:56:01.642325  597039 main.go:141] libmachine: (functional-729385) Calling .GetSSHKeyPath
I0210 12:56:01.642495  597039 main.go:141] libmachine: (functional-729385) Calling .GetSSHUsername
I0210 12:56:01.642632  597039 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/functional-729385/id_rsa Username:docker}
I0210 12:56:01.721770  597039 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:56:01.780121  597039 main.go:141] libmachine: Making call to close driver server
I0210 12:56:01.780141  597039 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:01.780498  597039 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:01.780538  597039 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:01.780551  597039 main.go:141] libmachine: Making call to close driver server
I0210 12:56:01.780562  597039 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:01.780823  597039 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:01.780846  597039 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:01.780864  597039 main.go:141] libmachine: (functional-729385) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
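The four ImageList subtests differ only in the output format requested from the same crictl image inventory on the node. Sketch of the short/table/json/yaml variants (same profile):

	minikube -p functional-729385 image ls --format short
	minikube -p functional-729385 image ls --format table
	minikube -p functional-729385 image ls --format json
	minikube -p functional-729385 image ls --format yaml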

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-729385 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| docker.io/library/nginx                 | latest             | 97662d24417b3 | 196MB  |
| localhost/kicbase/echo-server           | functional-729385  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/minikube-local-cache-test     | functional-729385  | a873b73cbe08b | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-729385 image ls --format table --alsologtostderr:
I0210 12:56:02.095416  597147 out.go:345] Setting OutFile to fd 1 ...
I0210 12:56:02.095525  597147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:02.095533  597147 out.go:358] Setting ErrFile to fd 2...
I0210 12:56:02.095537  597147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:02.095802  597147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
I0210 12:56:02.096661  597147 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:02.096805  597147 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:02.097285  597147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:02.097359  597147 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:02.112681  597147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44629
I0210 12:56:02.113148  597147 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:02.113862  597147 main.go:141] libmachine: Using API Version  1
I0210 12:56:02.113890  597147 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:02.114438  597147 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:02.114657  597147 main.go:141] libmachine: (functional-729385) Calling .GetState
I0210 12:56:02.117096  597147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:02.117142  597147 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:02.133539  597147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
I0210 12:56:02.134002  597147 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:02.134590  597147 main.go:141] libmachine: Using API Version  1
I0210 12:56:02.134614  597147 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:02.135060  597147 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:02.135289  597147 main.go:141] libmachine: (functional-729385) Calling .DriverName
I0210 12:56:02.135500  597147 ssh_runner.go:195] Run: systemctl --version
I0210 12:56:02.135530  597147 main.go:141] libmachine: (functional-729385) Calling .GetSSHHostname
I0210 12:56:02.138461  597147 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:02.138864  597147 main.go:141] libmachine: (functional-729385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:13:08", ip: ""} in network mk-functional-729385: {Iface:virbr1 ExpiryTime:2025-02-10 13:53:05 +0000 UTC Type:0 Mac:52:54:00:ed:13:08 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-729385 Clientid:01:52:54:00:ed:13:08}
I0210 12:56:02.138898  597147 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined IP address 192.168.39.70 and MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:02.138994  597147 main.go:141] libmachine: (functional-729385) Calling .GetSSHPort
I0210 12:56:02.139197  597147 main.go:141] libmachine: (functional-729385) Calling .GetSSHKeyPath
I0210 12:56:02.139338  597147 main.go:141] libmachine: (functional-729385) Calling .GetSSHUsername
I0210 12:56:02.139524  597147 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/functional-729385/id_rsa Username:docker}
I0210 12:56:02.266520  597147 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:56:02.408925  597147 main.go:141] libmachine: Making call to close driver server
I0210 12:56:02.408946  597147 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:02.409269  597147 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:02.409283  597147 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:02.409298  597147 main.go:141] libmachine: Making call to close driver server
I0210 12:56:02.409305  597147 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:02.411100  597147 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:02.411140  597147 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:02.411149  597147 main.go:141] libmachine: (functional-729385) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-729385 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"i
d":"a873b73cbe08b046eff94f17aae0d8b769dc1896f2402eba7f17d283b8c772e6","repoDigests":["localhost/minikube-local-cache-test@sha256:25418b330e1aea404b06139fa6959d6ddaf71f0ddea766bfde99de8140319c81"],"repoTags":["localhost/minikube-local-cache-test:functional-729385"],"size":"3330"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"e29f9c7391fd92d96bc72
026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],
"size":"94963761"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-729385"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"da86e6ba6ca197bf6bc5
e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0c
c619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":["docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7","docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"196149140"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0
b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-729385 image ls --format json --alsologtostderr:
I0210 12:56:01.843737  597086 out.go:345] Setting OutFile to fd 1 ...
I0210 12:56:01.844361  597086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:01.844379  597086 out.go:358] Setting ErrFile to fd 2...
I0210 12:56:01.844387  597086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:01.844819  597086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
I0210 12:56:01.846264  597086 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:01.846476  597086 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:01.847480  597086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:01.847567  597086 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:01.862542  597086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
I0210 12:56:01.863074  597086 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:01.863752  597086 main.go:141] libmachine: Using API Version  1
I0210 12:56:01.863780  597086 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:01.864161  597086 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:01.864427  597086 main.go:141] libmachine: (functional-729385) Calling .GetState
I0210 12:56:01.866447  597086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:01.866509  597086 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:01.884443  597086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
I0210 12:56:01.884877  597086 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:01.885272  597086 main.go:141] libmachine: Using API Version  1
I0210 12:56:01.885296  597086 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:01.885650  597086 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:01.885819  597086 main.go:141] libmachine: (functional-729385) Calling .DriverName
I0210 12:56:01.886052  597086 ssh_runner.go:195] Run: systemctl --version
I0210 12:56:01.886087  597086 main.go:141] libmachine: (functional-729385) Calling .GetSSHHostname
I0210 12:56:01.888624  597086 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:01.889068  597086 main.go:141] libmachine: (functional-729385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:13:08", ip: ""} in network mk-functional-729385: {Iface:virbr1 ExpiryTime:2025-02-10 13:53:05 +0000 UTC Type:0 Mac:52:54:00:ed:13:08 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-729385 Clientid:01:52:54:00:ed:13:08}
I0210 12:56:01.889091  597086 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined IP address 192.168.39.70 and MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:01.889247  597086 main.go:141] libmachine: (functional-729385) Calling .GetSSHPort
I0210 12:56:01.889419  597086 main.go:141] libmachine: (functional-729385) Calling .GetSSHKeyPath
I0210 12:56:01.889557  597086 main.go:141] libmachine: (functional-729385) Calling .GetSSHUsername
I0210 12:56:01.889727  597086 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/functional-729385/id_rsa Username:docker}
I0210 12:56:01.975007  597086 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:56:02.030379  597086 main.go:141] libmachine: Making call to close driver server
I0210 12:56:02.030398  597086 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:02.030682  597086 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:02.030700  597086 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:02.030718  597086 main.go:141] libmachine: Making call to close driver server
I0210 12:56:02.030726  597086 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:02.030965  597086 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:02.030987  597086 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:02.030992  597086 main.go:141] libmachine: (functional-729385) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-729385 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests:
- docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "196149140"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-729385
size: "4943877"
- id: a873b73cbe08b046eff94f17aae0d8b769dc1896f2402eba7f17d283b8c772e6
repoDigests:
- localhost/minikube-local-cache-test@sha256:25418b330e1aea404b06139fa6959d6ddaf71f0ddea766bfde99de8140319c81
repoTags:
- localhost/minikube-local-cache-test:functional-729385
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-729385 image ls --format yaml --alsologtostderr:
I0210 12:56:01.600181  597038 out.go:345] Setting OutFile to fd 1 ...
I0210 12:56:01.600447  597038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:01.600456  597038 out.go:358] Setting ErrFile to fd 2...
I0210 12:56:01.600461  597038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:56:01.600663  597038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
I0210 12:56:01.601252  597038 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:01.601374  597038 config.go:182] Loaded profile config "functional-729385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0210 12:56:01.601746  597038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:01.601818  597038 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:01.617361  597038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
I0210 12:56:01.617809  597038 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:01.618414  597038 main.go:141] libmachine: Using API Version  1
I0210 12:56:01.618437  597038 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:01.618818  597038 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:01.619030  597038 main.go:141] libmachine: (functional-729385) Calling .GetState
I0210 12:56:01.621118  597038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0210 12:56:01.621170  597038 main.go:141] libmachine: Launching plugin server for driver kvm2
I0210 12:56:01.636342  597038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
I0210 12:56:01.636784  597038 main.go:141] libmachine: () Calling .GetVersion
I0210 12:56:01.637302  597038 main.go:141] libmachine: Using API Version  1
I0210 12:56:01.637323  597038 main.go:141] libmachine: () Calling .SetConfigRaw
I0210 12:56:01.637714  597038 main.go:141] libmachine: () Calling .GetMachineName
I0210 12:56:01.637897  597038 main.go:141] libmachine: (functional-729385) Calling .DriverName
I0210 12:56:01.638101  597038 ssh_runner.go:195] Run: systemctl --version
I0210 12:56:01.638129  597038 main.go:141] libmachine: (functional-729385) Calling .GetSSHHostname
I0210 12:56:01.641112  597038 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:01.641531  597038 main.go:141] libmachine: (functional-729385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:13:08", ip: ""} in network mk-functional-729385: {Iface:virbr1 ExpiryTime:2025-02-10 13:53:05 +0000 UTC Type:0 Mac:52:54:00:ed:13:08 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:functional-729385 Clientid:01:52:54:00:ed:13:08}
I0210 12:56:01.641558  597038 main.go:141] libmachine: (functional-729385) DBG | domain functional-729385 has defined IP address 192.168.39.70 and MAC address 52:54:00:ed:13:08 in network mk-functional-729385
I0210 12:56:01.641691  597038 main.go:141] libmachine: (functional-729385) Calling .GetSSHPort
I0210 12:56:01.641849  597038 main.go:141] libmachine: (functional-729385) Calling .GetSSHKeyPath
I0210 12:56:01.642003  597038 main.go:141] libmachine: (functional-729385) Calling .GetSSHUsername
I0210 12:56:01.642187  597038 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/functional-729385/id_rsa Username:docker}
I0210 12:56:01.721793  597038 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:56:01.781865  597038 main.go:141] libmachine: Making call to close driver server
I0210 12:56:01.781880  597038 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:01.784134  597038 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:01.784151  597038 main.go:141] libmachine: Making call to close connection to plugin binary
I0210 12:56:01.784162  597038 main.go:141] libmachine: Making call to close driver server
I0210 12:56:01.784180  597038 main.go:141] libmachine: (functional-729385) Calling .Close
I0210 12:56:01.784514  597038 main.go:141] libmachine: (functional-729385) DBG | Closing plugin on server side
I0210 12:56:01.784531  597038 main.go:141] libmachine: Successfully made call to close driver server
I0210 12:56:01.784552  597038 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
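
For reference, the two ImageList subtests above exercise minikube's image-listing command against this profile's crio runtime. A minimal sketch of the same calls follows; the JSON variant is assumed to mirror the YAML invocation shown in the log, since only the stderr trace of ImageListJson is reproduced above.

# List images visible to the cluster's container runtime in YAML, as run by ImageListYaml
out/minikube-linux-amd64 -p functional-729385 image ls --format yaml
# ImageListJson is assumed to use the json formatter of the same command
out/minikube-linux-amd64 -p functional-729385 image ls --format json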

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.258509657s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-729385
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image load --daemon kicbase/echo-server:functional-729385 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 image load --daemon kicbase/echo-server:functional-729385 --alsologtostderr: (1.155414945s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)
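
The load-from-daemon tests above push an image that exists only in the host's Docker daemon into the node's crio storage. A minimal sketch of that flow, using the tag set up earlier in ImageCommands/Setup:

# Tag the pulled image with the profile-scoped name, then load it into the cluster runtime
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-729385
out/minikube-linux-amd64 -p functional-729385 image load --daemon kicbase/echo-server:functional-729385
# Confirm the image is now listed by the runtime inside the node
out/minikube-linux-amd64 -p functional-729385 image ls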

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image load --daemon kicbase/echo-server:functional-729385 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-729385
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image load --daemon kicbase/echo-server:functional-729385 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
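
The three UpdateContextCmd subtests above all run the same command, which rewrites the profile's kubeconfig entry to match the cluster's current endpoint. A minimal sketch; the kubectl check afterwards is an assumed follow-up, not part of the test:

# Re-sync the kubeconfig entry for this profile (a no-op when nothing has changed)
out/minikube-linux-amd64 -p functional-729385 update-context --alsologtostderr -v=2
# Assumed follow-up: confirm the refreshed context still reaches the API server
kubectl --context functional-729385 get nodes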

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image save kicbase/echo-server:functional-729385 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-linux-amd64 -p functional-729385 image save kicbase/echo-server:functional-729385 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.460280097s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image rm kicbase/echo-server:functional-729385 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)
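
ImageSaveToFile and ImageLoadFromFile above form a round trip through a tarball on the host. A minimal sketch of that round trip; the /tmp path below is illustrative rather than the workspace path used by the test:

# Export the image from the cluster runtime to a tarball on the host
out/minikube-linux-amd64 -p functional-729385 image save kicbase/echo-server:functional-729385 /tmp/echo-server-save.tar
# Remove it from the runtime, then re-import it from the tarball and verify
out/minikube-linux-amd64 -p functional-729385 image rm kicbase/echo-server:functional-729385
out/minikube-linux-amd64 -p functional-729385 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-729385 image ls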

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-729385
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-729385 image save --daemon kicbase/echo-server:functional-729385 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-729385
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-729385
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-729385
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-729385
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (205.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-781985 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 12:57:13.585194  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:57:41.290349  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-781985 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.73983026s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (205.43s)
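
The StartCluster step above brings up a cluster with three control-plane nodes behind a shared virtual API endpoint (192.168.39.254 in this run) in one invocation. A minimal sketch of the same start and status calls, taken from the log:

# Start an HA (multi-control-plane) cluster on the kvm2 driver with the crio runtime
out/minikube-linux-amd64 start -p ha-781985 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
# Report host/kubelet/apiserver/kubeconfig state for every node in the profile
out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr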

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-781985 -- rollout status deployment/busybox: (4.935456347s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-4pnph -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-rq2mz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-t9cjr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-4pnph -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-rq2mz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-t9cjr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-4pnph -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-rq2mz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-t9cjr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.17s)
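
The DeployApp step above rolls out a three-replica busybox Deployment and checks DNS resolution of external and in-cluster names from every replica. A minimal sketch of one such check; the pod name is a placeholder for whatever kubectl get pods returns:

# Deploy the test workload and wait until all replicas are ready
out/minikube-linux-amd64 kubectl -p ha-781985 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-781985 -- rollout status deployment/busybox
# Resolve an in-cluster name from one replica (substitute a real pod name)
out/minikube-linux-amd64 kubectl -p ha-781985 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local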

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-4pnph -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-4pnph -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-rq2mz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-rq2mz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-t9cjr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-781985 -- exec busybox-58667487b6-t9cjr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
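
PingHostFromPods above verifies that each pod can resolve and reach the host machine through the cluster network. A minimal sketch of one iteration; the pod name is a placeholder and 192.168.39.1 is the host gateway address seen in this run:

# Resolve the host's address as seen from inside a pod
out/minikube-linux-amd64 kubectl -p ha-781985 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
# Ping the resolved gateway address once
out/minikube-linux-amd64 kubectl -p ha-781985 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"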

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-781985 -v=7 --alsologtostderr
E0210 13:00:33.642883  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:33.649331  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:33.660755  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:33.682239  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:33.723763  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:33.805260  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:33.967507  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:34.289264  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:34.931003  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:36.212439  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:38.773971  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:43.896010  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:54.138285  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-781985 -v=7 --alsologtostderr: (57.742497639s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.62s)
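
AddWorkerNode above grows the HA profile by one worker (m04 in this run) and re-checks cluster status. A minimal sketch of the same two calls from the log:

# Add a worker node to the existing profile
out/minikube-linux-amd64 node add -p ha-781985 -v=7 --alsologtostderr
# Confirm the new node reports host and kubelet Running
out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr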

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-781985 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp testdata/cp-test.txt ha-781985:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile226298417/001/cp-test_ha-781985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985:/home/docker/cp-test.txt ha-781985-m02:/home/docker/cp-test_ha-781985_ha-781985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test_ha-781985_ha-781985-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985:/home/docker/cp-test.txt ha-781985-m03:/home/docker/cp-test_ha-781985_ha-781985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test_ha-781985_ha-781985-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985:/home/docker/cp-test.txt ha-781985-m04:/home/docker/cp-test_ha-781985_ha-781985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test_ha-781985_ha-781985-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp testdata/cp-test.txt ha-781985-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile226298417/001/cp-test_ha-781985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m02:/home/docker/cp-test.txt ha-781985:/home/docker/cp-test_ha-781985-m02_ha-781985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test_ha-781985-m02_ha-781985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m02:/home/docker/cp-test.txt ha-781985-m03:/home/docker/cp-test_ha-781985-m02_ha-781985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test_ha-781985-m02_ha-781985-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m02:/home/docker/cp-test.txt ha-781985-m04:/home/docker/cp-test_ha-781985-m02_ha-781985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test_ha-781985-m02_ha-781985-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp testdata/cp-test.txt ha-781985-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile226298417/001/cp-test_ha-781985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m03:/home/docker/cp-test.txt ha-781985:/home/docker/cp-test_ha-781985-m03_ha-781985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test_ha-781985-m03_ha-781985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m03:/home/docker/cp-test.txt ha-781985-m02:/home/docker/cp-test_ha-781985-m03_ha-781985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test_ha-781985-m03_ha-781985-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m03:/home/docker/cp-test.txt ha-781985-m04:/home/docker/cp-test_ha-781985-m03_ha-781985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test_ha-781985-m03_ha-781985-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp testdata/cp-test.txt ha-781985-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile226298417/001/cp-test_ha-781985-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m04:/home/docker/cp-test.txt ha-781985:/home/docker/cp-test_ha-781985-m04_ha-781985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985 "sudo cat /home/docker/cp-test_ha-781985-m04_ha-781985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m04:/home/docker/cp-test.txt ha-781985-m02:/home/docker/cp-test_ha-781985-m04_ha-781985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test_ha-781985-m04_ha-781985-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m04:/home/docker/cp-test.txt ha-781985-m03:/home/docker/cp-test_ha-781985-m04_ha-781985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m03 "sudo cat /home/docker/cp-test_ha-781985-m04_ha-781985-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.31s)
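
CopyFile above pushes a test file from the host to every node and between every pair of nodes, reading it back over ssh each time. A minimal sketch of one host-to-node and one node-to-node leg, using commands from the log:

# Host -> node copy, then read it back on that node
out/minikube-linux-amd64 -p ha-781985 cp testdata/cp-test.txt ha-781985-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-781985 ssh -n ha-781985-m02 "sudo cat /home/docker/cp-test.txt"
# Node -> node copy between two members of the same profile
out/minikube-linux-amd64 -p ha-781985 cp ha-781985-m02:/home/docker/cp-test.txt ha-781985-m03:/home/docker/cp-test_ha-781985-m02_ha-781985-m03.txt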

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 node stop m02 -v=7 --alsologtostderr
E0210 13:01:14.620519  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:01:55.582561  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:02:13.584991  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-781985 node stop m02 -v=7 --alsologtostderr: (1m30.993808008s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr: exit status 7 (688.799992ms)

                                                
                                                
-- stdout --
	ha-781985
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-781985-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-781985-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-781985-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:02:42.953670  601950 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:02:42.953811  601950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:02:42.953823  601950 out.go:358] Setting ErrFile to fd 2...
	I0210 13:02:42.953827  601950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:02:42.954032  601950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:02:42.954229  601950 out.go:352] Setting JSON to false
	I0210 13:02:42.954259  601950 mustload.go:65] Loading cluster: ha-781985
	I0210 13:02:42.954312  601950 notify.go:220] Checking for updates...
	I0210 13:02:42.954817  601950 config.go:182] Loaded profile config "ha-781985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:02:42.954849  601950 status.go:174] checking status of ha-781985 ...
	I0210 13:02:42.955451  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:42.955503  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:42.976803  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0210 13:02:42.977335  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:42.978140  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:42.978176  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:42.978534  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:42.978823  601950 main.go:141] libmachine: (ha-781985) Calling .GetState
	I0210 13:02:42.980662  601950 status.go:371] ha-781985 host status = "Running" (err=<nil>)
	I0210 13:02:42.980680  601950 host.go:66] Checking if "ha-781985" exists ...
	I0210 13:02:42.981067  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:42.981121  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:42.995812  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0210 13:02:42.996195  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:42.996668  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:42.996691  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:42.996994  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:42.997154  601950 main.go:141] libmachine: (ha-781985) Calling .GetIP
	I0210 13:02:42.999918  601950 main.go:141] libmachine: (ha-781985) DBG | domain ha-781985 has defined MAC address 52:54:00:08:ce:90 in network mk-ha-781985
	I0210 13:02:43.000378  601950 main.go:141] libmachine: (ha-781985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:ce:90", ip: ""} in network mk-ha-781985: {Iface:virbr1 ExpiryTime:2025-02-10 13:56:40 +0000 UTC Type:0 Mac:52:54:00:08:ce:90 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-781985 Clientid:01:52:54:00:08:ce:90}
	I0210 13:02:43.000405  601950 main.go:141] libmachine: (ha-781985) DBG | domain ha-781985 has defined IP address 192.168.39.174 and MAC address 52:54:00:08:ce:90 in network mk-ha-781985
	I0210 13:02:43.000559  601950 host.go:66] Checking if "ha-781985" exists ...
	I0210 13:02:43.000857  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.000893  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.015719  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0210 13:02:43.016146  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.016650  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.016676  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.017030  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.017224  601950 main.go:141] libmachine: (ha-781985) Calling .DriverName
	I0210 13:02:43.017430  601950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 13:02:43.017467  601950 main.go:141] libmachine: (ha-781985) Calling .GetSSHHostname
	I0210 13:02:43.020566  601950 main.go:141] libmachine: (ha-781985) DBG | domain ha-781985 has defined MAC address 52:54:00:08:ce:90 in network mk-ha-781985
	I0210 13:02:43.021069  601950 main.go:141] libmachine: (ha-781985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:ce:90", ip: ""} in network mk-ha-781985: {Iface:virbr1 ExpiryTime:2025-02-10 13:56:40 +0000 UTC Type:0 Mac:52:54:00:08:ce:90 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-781985 Clientid:01:52:54:00:08:ce:90}
	I0210 13:02:43.021101  601950 main.go:141] libmachine: (ha-781985) DBG | domain ha-781985 has defined IP address 192.168.39.174 and MAC address 52:54:00:08:ce:90 in network mk-ha-781985
	I0210 13:02:43.021269  601950 main.go:141] libmachine: (ha-781985) Calling .GetSSHPort
	I0210 13:02:43.021452  601950 main.go:141] libmachine: (ha-781985) Calling .GetSSHKeyPath
	I0210 13:02:43.021619  601950 main.go:141] libmachine: (ha-781985) Calling .GetSSHUsername
	I0210 13:02:43.021755  601950 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/ha-781985/id_rsa Username:docker}
	I0210 13:02:43.117894  601950 ssh_runner.go:195] Run: systemctl --version
	I0210 13:02:43.125505  601950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:02:43.146154  601950 kubeconfig.go:125] found "ha-781985" server: "https://192.168.39.254:8443"
	I0210 13:02:43.146217  601950 api_server.go:166] Checking apiserver status ...
	I0210 13:02:43.146264  601950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:02:43.168017  601950 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup
	W0210 13:02:43.180082  601950 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:02:43.180138  601950 ssh_runner.go:195] Run: ls
	I0210 13:02:43.185083  601950 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 13:02:43.190240  601950 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 13:02:43.190262  601950 status.go:463] ha-781985 apiserver status = Running (err=<nil>)
	I0210 13:02:43.190276  601950 status.go:176] ha-781985 status: &{Name:ha-781985 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:02:43.190303  601950 status.go:174] checking status of ha-781985-m02 ...
	I0210 13:02:43.190703  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.190776  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.205782  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0210 13:02:43.206276  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.206831  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.206857  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.207213  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.207446  601950 main.go:141] libmachine: (ha-781985-m02) Calling .GetState
	I0210 13:02:43.209247  601950 status.go:371] ha-781985-m02 host status = "Stopped" (err=<nil>)
	I0210 13:02:43.209266  601950 status.go:384] host is not running, skipping remaining checks
	I0210 13:02:43.209274  601950 status.go:176] ha-781985-m02 status: &{Name:ha-781985-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:02:43.209297  601950 status.go:174] checking status of ha-781985-m03 ...
	I0210 13:02:43.209723  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.209777  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.225768  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0210 13:02:43.226399  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.227032  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.227071  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.227457  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.227678  601950 main.go:141] libmachine: (ha-781985-m03) Calling .GetState
	I0210 13:02:43.229232  601950 status.go:371] ha-781985-m03 host status = "Running" (err=<nil>)
	I0210 13:02:43.229249  601950 host.go:66] Checking if "ha-781985-m03" exists ...
	I0210 13:02:43.229535  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.229573  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.243969  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0210 13:02:43.244449  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.244903  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.244925  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.245284  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.245465  601950 main.go:141] libmachine: (ha-781985-m03) Calling .GetIP
	I0210 13:02:43.248149  601950 main.go:141] libmachine: (ha-781985-m03) DBG | domain ha-781985-m03 has defined MAC address 52:54:00:f7:96:18 in network mk-ha-781985
	I0210 13:02:43.248666  601950 main.go:141] libmachine: (ha-781985-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:96:18", ip: ""} in network mk-ha-781985: {Iface:virbr1 ExpiryTime:2025-02-10 13:58:43 +0000 UTC Type:0 Mac:52:54:00:f7:96:18 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-781985-m03 Clientid:01:52:54:00:f7:96:18}
	I0210 13:02:43.248694  601950 main.go:141] libmachine: (ha-781985-m03) DBG | domain ha-781985-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:f7:96:18 in network mk-ha-781985
	I0210 13:02:43.248860  601950 host.go:66] Checking if "ha-781985-m03" exists ...
	I0210 13:02:43.249165  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.249220  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.264502  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36557
	I0210 13:02:43.264952  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.265504  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.265528  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.265891  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.266156  601950 main.go:141] libmachine: (ha-781985-m03) Calling .DriverName
	I0210 13:02:43.266398  601950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 13:02:43.266426  601950 main.go:141] libmachine: (ha-781985-m03) Calling .GetSSHHostname
	I0210 13:02:43.269321  601950 main.go:141] libmachine: (ha-781985-m03) DBG | domain ha-781985-m03 has defined MAC address 52:54:00:f7:96:18 in network mk-ha-781985
	I0210 13:02:43.269739  601950 main.go:141] libmachine: (ha-781985-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:96:18", ip: ""} in network mk-ha-781985: {Iface:virbr1 ExpiryTime:2025-02-10 13:58:43 +0000 UTC Type:0 Mac:52:54:00:f7:96:18 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-781985-m03 Clientid:01:52:54:00:f7:96:18}
	I0210 13:02:43.269768  601950 main.go:141] libmachine: (ha-781985-m03) DBG | domain ha-781985-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:f7:96:18 in network mk-ha-781985
	I0210 13:02:43.269915  601950 main.go:141] libmachine: (ha-781985-m03) Calling .GetSSHPort
	I0210 13:02:43.270107  601950 main.go:141] libmachine: (ha-781985-m03) Calling .GetSSHKeyPath
	I0210 13:02:43.270258  601950 main.go:141] libmachine: (ha-781985-m03) Calling .GetSSHUsername
	I0210 13:02:43.270389  601950 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/ha-781985-m03/id_rsa Username:docker}
	I0210 13:02:43.353377  601950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:02:43.374597  601950 kubeconfig.go:125] found "ha-781985" server: "https://192.168.39.254:8443"
	I0210 13:02:43.374640  601950 api_server.go:166] Checking apiserver status ...
	I0210 13:02:43.374684  601950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:02:43.393312  601950 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1455/cgroup
	W0210 13:02:43.407943  601950 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1455/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:02:43.408024  601950 ssh_runner.go:195] Run: ls
	I0210 13:02:43.413907  601950 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0210 13:02:43.418719  601950 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0210 13:02:43.418753  601950 status.go:463] ha-781985-m03 apiserver status = Running (err=<nil>)
	I0210 13:02:43.418765  601950 status.go:176] ha-781985-m03 status: &{Name:ha-781985-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:02:43.418785  601950 status.go:174] checking status of ha-781985-m04 ...
	I0210 13:02:43.419201  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.419249  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.435406  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44809
	I0210 13:02:43.435847  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.436423  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.436456  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.436808  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.437047  601950 main.go:141] libmachine: (ha-781985-m04) Calling .GetState
	I0210 13:02:43.438774  601950 status.go:371] ha-781985-m04 host status = "Running" (err=<nil>)
	I0210 13:02:43.438793  601950 host.go:66] Checking if "ha-781985-m04" exists ...
	I0210 13:02:43.439080  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.439118  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.454529  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0210 13:02:43.455017  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.455642  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.455691  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.456000  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.456235  601950 main.go:141] libmachine: (ha-781985-m04) Calling .GetIP
	I0210 13:02:43.459012  601950 main.go:141] libmachine: (ha-781985-m04) DBG | domain ha-781985-m04 has defined MAC address 52:54:00:94:93:27 in network mk-ha-781985
	I0210 13:02:43.459474  601950 main.go:141] libmachine: (ha-781985-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:93:27", ip: ""} in network mk-ha-781985: {Iface:virbr1 ExpiryTime:2025-02-10 14:00:15 +0000 UTC Type:0 Mac:52:54:00:94:93:27 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-781985-m04 Clientid:01:52:54:00:94:93:27}
	I0210 13:02:43.459523  601950 main.go:141] libmachine: (ha-781985-m04) DBG | domain ha-781985-m04 has defined IP address 192.168.39.196 and MAC address 52:54:00:94:93:27 in network mk-ha-781985
	I0210 13:02:43.459627  601950 host.go:66] Checking if "ha-781985-m04" exists ...
	I0210 13:02:43.460065  601950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:02:43.460116  601950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:02:43.475639  601950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0210 13:02:43.476076  601950 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:02:43.476582  601950 main.go:141] libmachine: Using API Version  1
	I0210 13:02:43.476607  601950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:02:43.476947  601950 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:02:43.477138  601950 main.go:141] libmachine: (ha-781985-m04) Calling .DriverName
	I0210 13:02:43.477336  601950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 13:02:43.477357  601950 main.go:141] libmachine: (ha-781985-m04) Calling .GetSSHHostname
	I0210 13:02:43.479870  601950 main.go:141] libmachine: (ha-781985-m04) DBG | domain ha-781985-m04 has defined MAC address 52:54:00:94:93:27 in network mk-ha-781985
	I0210 13:02:43.480303  601950 main.go:141] libmachine: (ha-781985-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:93:27", ip: ""} in network mk-ha-781985: {Iface:virbr1 ExpiryTime:2025-02-10 14:00:15 +0000 UTC Type:0 Mac:52:54:00:94:93:27 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-781985-m04 Clientid:01:52:54:00:94:93:27}
	I0210 13:02:43.480331  601950 main.go:141] libmachine: (ha-781985-m04) DBG | domain ha-781985-m04 has defined IP address 192.168.39.196 and MAC address 52:54:00:94:93:27 in network mk-ha-781985
	I0210 13:02:43.480477  601950 main.go:141] libmachine: (ha-781985-m04) Calling .GetSSHPort
	I0210 13:02:43.480696  601950 main.go:141] libmachine: (ha-781985-m04) Calling .GetSSHKeyPath
	I0210 13:02:43.480847  601950 main.go:141] libmachine: (ha-781985-m04) Calling .GetSSHUsername
	I0210 13:02:43.480999  601950 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/ha-781985-m04/id_rsa Username:docker}
	I0210 13:02:43.573819  601950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:02:43.590882  601950 status.go:176] ha-781985-m04 status: &{Name:ha-781985-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.68s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (58.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 node start m02 -v=7 --alsologtostderr
E0210 13:03:17.504483  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-781985 node start m02 -v=7 --alsologtostderr: (57.836199065s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (58.75s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (443.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-781985 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-781985 -v=7 --alsologtostderr
E0210 13:05:33.643160  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:06:01.345893  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:07:13.585385  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-781985 -v=7 --alsologtostderr: (4m34.308536131s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-781985 --wait=true -v=7 --alsologtostderr
E0210 13:08:36.652455  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:10:33.642498  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-781985 --wait=true -v=7 --alsologtostderr: (2m48.827913779s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-781985
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (443.26s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-781985 node delete m03 -v=7 --alsologtostderr: (17.885698291s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.70s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 stop -v=7 --alsologtostderr
E0210 13:12:13.584573  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:15:33.642570  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-781985 stop -v=7 --alsologtostderr: (4m32.654193825s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr: exit status 7 (115.893158ms)

                                                
                                                
-- stdout --
	ha-781985
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-781985-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-781985-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:15:59.186382  606238 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:15:59.186561  606238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:15:59.186572  606238 out.go:358] Setting ErrFile to fd 2...
	I0210 13:15:59.186578  606238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:15:59.186807  606238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:15:59.187015  606238 out.go:352] Setting JSON to false
	I0210 13:15:59.187064  606238 mustload.go:65] Loading cluster: ha-781985
	I0210 13:15:59.187154  606238 notify.go:220] Checking for updates...
	I0210 13:15:59.187628  606238 config.go:182] Loaded profile config "ha-781985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:15:59.187658  606238 status.go:174] checking status of ha-781985 ...
	I0210 13:15:59.188265  606238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:15:59.188336  606238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:15:59.207376  606238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41249
	I0210 13:15:59.207927  606238 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:15:59.208642  606238 main.go:141] libmachine: Using API Version  1
	I0210 13:15:59.208665  606238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:15:59.209025  606238 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:15:59.209286  606238 main.go:141] libmachine: (ha-781985) Calling .GetState
	I0210 13:15:59.211036  606238 status.go:371] ha-781985 host status = "Stopped" (err=<nil>)
	I0210 13:15:59.211054  606238 status.go:384] host is not running, skipping remaining checks
	I0210 13:15:59.211062  606238 status.go:176] ha-781985 status: &{Name:ha-781985 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:15:59.211109  606238 status.go:174] checking status of ha-781985-m02 ...
	I0210 13:15:59.211562  606238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:15:59.211622  606238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:15:59.226963  606238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I0210 13:15:59.227505  606238 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:15:59.228052  606238 main.go:141] libmachine: Using API Version  1
	I0210 13:15:59.228077  606238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:15:59.228469  606238 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:15:59.228689  606238 main.go:141] libmachine: (ha-781985-m02) Calling .GetState
	I0210 13:15:59.230224  606238 status.go:371] ha-781985-m02 host status = "Stopped" (err=<nil>)
	I0210 13:15:59.230235  606238 status.go:384] host is not running, skipping remaining checks
	I0210 13:15:59.230240  606238 status.go:176] ha-781985-m02 status: &{Name:ha-781985-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:15:59.230267  606238 status.go:174] checking status of ha-781985-m04 ...
	I0210 13:15:59.230564  606238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:15:59.230628  606238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:15:59.245902  606238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0210 13:15:59.246355  606238 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:15:59.246849  606238 main.go:141] libmachine: Using API Version  1
	I0210 13:15:59.246872  606238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:15:59.247188  606238 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:15:59.247398  606238 main.go:141] libmachine: (ha-781985-m04) Calling .GetState
	I0210 13:15:59.248883  606238 status.go:371] ha-781985-m04 host status = "Stopped" (err=<nil>)
	I0210 13:15:59.248904  606238 status.go:384] host is not running, skipping remaining checks
	I0210 13:15:59.248909  606238 status.go:176] ha-781985-m04 status: &{Name:ha-781985-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (103.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-781985 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 13:16:56.707350  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:17:13.585209  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-781985 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m42.108414426s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.01s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-781985 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-781985 --control-plane -v=7 --alsologtostderr: (1m20.66643825s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-781985 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.55s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (50.2s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-031629 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-031629 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (50.200527459s)
--- PASS: TestJSONOutput/start/Command (50.20s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-031629 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-031629 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-031629 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-031629 --output=json --user=testUser: (7.349438058s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-676227 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-676227 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.8579ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ebd6dd42-be30-46b0-88d7-46c6c257de8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-676227] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5ae952d-f4b3-46af-a031-cee9a647a0b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20390"}}
	{"specversion":"1.0","id":"9c34fb11-d829-4404-8e94-45c4edd9653c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"47648d97-810b-475d-9da6-22eac47045ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig"}}
	{"specversion":"1.0","id":"9eabcd25-3fea-4436-b9f9-cc82466e99ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube"}}
	{"specversion":"1.0","id":"d4a20ecf-f6a5-4c85-8f4f-06b4423105a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5f5532e4-06df-4930-a281-6312bd6c46c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d9de36d9-7964-4e94-b947-6be5aa370a44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-676227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-676227
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (97.88s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-251652 --driver=kvm2  --container-runtime=crio
E0210 13:20:33.642746  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-251652 --driver=kvm2  --container-runtime=crio: (46.055775907s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-267989 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-267989 --driver=kvm2  --container-runtime=crio: (48.880662677s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-251652
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-267989
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-267989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-267989
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-267989: (1.014054919s)
helpers_test.go:175: Cleaning up "first-251652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-251652
--- PASS: TestMinikubeProfile (97.88s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-769429 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0210 13:22:13.584831  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-769429 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.727785321s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.73s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-769429 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-769429 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-788409 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-788409 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.634704045s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.64s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788409 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788409 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-769429 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788409 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788409 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-788409
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-788409: (1.305027283s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.94s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-788409
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-788409: (23.937132579s)
--- PASS: TestMountStart/serial/RestartStopped (24.94s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788409 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788409 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (119.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-149216 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-149216 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.597229227s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.02s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- rollout status deployment/busybox
E0210 13:25:16.655956  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-149216 -- rollout status deployment/busybox: (4.366176388s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-4lssp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-crvzr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-4lssp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-crvzr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-4lssp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-crvzr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.94s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-4lssp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-4lssp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-crvzr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-149216 -- exec busybox-58667487b6-crvzr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
TestMultiNode/serial/AddNode (50.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-149216 -v 3 --alsologtostderr
E0210 13:25:33.642722  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-149216 -v 3 --alsologtostderr: (50.366798545s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.96s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-149216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp testdata/cp-test.txt multinode-149216:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1944738138/001/cp-test_multinode-149216.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216:/home/docker/cp-test.txt multinode-149216-m02:/home/docker/cp-test_multinode-149216_multinode-149216-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m02 "sudo cat /home/docker/cp-test_multinode-149216_multinode-149216-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216:/home/docker/cp-test.txt multinode-149216-m03:/home/docker/cp-test_multinode-149216_multinode-149216-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m03 "sudo cat /home/docker/cp-test_multinode-149216_multinode-149216-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp testdata/cp-test.txt multinode-149216-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1944738138/001/cp-test_multinode-149216-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216-m02:/home/docker/cp-test.txt multinode-149216:/home/docker/cp-test_multinode-149216-m02_multinode-149216.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216 "sudo cat /home/docker/cp-test_multinode-149216-m02_multinode-149216.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216-m02:/home/docker/cp-test.txt multinode-149216-m03:/home/docker/cp-test_multinode-149216-m02_multinode-149216-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m03 "sudo cat /home/docker/cp-test_multinode-149216-m02_multinode-149216-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp testdata/cp-test.txt multinode-149216-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1944738138/001/cp-test_multinode-149216-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216-m03:/home/docker/cp-test.txt multinode-149216:/home/docker/cp-test_multinode-149216-m03_multinode-149216.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216 "sudo cat /home/docker/cp-test_multinode-149216-m03_multinode-149216.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 cp multinode-149216-m03:/home/docker/cp-test.txt multinode-149216-m02:/home/docker/cp-test_multinode-149216-m03_multinode-149216-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 ssh -n multinode-149216-m02 "sudo cat /home/docker/cp-test_multinode-149216-m03_multinode-149216-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.57s)

                                                
                                    
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-149216 node stop m03: (1.559837539s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-149216 status: exit status 7 (438.132668ms)

                                                
                                                
-- stdout --
	multinode-149216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-149216-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-149216-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr: exit status 7 (432.455174ms)

                                                
                                                
-- stdout --
	multinode-149216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-149216-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-149216-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:26:21.688533  614358 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:26:21.688647  614358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:26:21.688655  614358 out.go:358] Setting ErrFile to fd 2...
	I0210 13:26:21.688659  614358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:26:21.688842  614358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:26:21.689013  614358 out.go:352] Setting JSON to false
	I0210 13:26:21.689046  614358 mustload.go:65] Loading cluster: multinode-149216
	I0210 13:26:21.689143  614358 notify.go:220] Checking for updates...
	I0210 13:26:21.689443  614358 config.go:182] Loaded profile config "multinode-149216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:26:21.689462  614358 status.go:174] checking status of multinode-149216 ...
	I0210 13:26:21.690038  614358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:26:21.690086  614358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:26:21.712363  614358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39405
	I0210 13:26:21.712902  614358 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:26:21.713507  614358 main.go:141] libmachine: Using API Version  1
	I0210 13:26:21.713534  614358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:26:21.713847  614358 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:26:21.714091  614358 main.go:141] libmachine: (multinode-149216) Calling .GetState
	I0210 13:26:21.715726  614358 status.go:371] multinode-149216 host status = "Running" (err=<nil>)
	I0210 13:26:21.715744  614358 host.go:66] Checking if "multinode-149216" exists ...
	I0210 13:26:21.716056  614358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:26:21.716100  614358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:26:21.732046  614358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0210 13:26:21.732547  614358 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:26:21.733093  614358 main.go:141] libmachine: Using API Version  1
	I0210 13:26:21.733112  614358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:26:21.733458  614358 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:26:21.733618  614358 main.go:141] libmachine: (multinode-149216) Calling .GetIP
	I0210 13:26:21.736695  614358 main.go:141] libmachine: (multinode-149216) DBG | domain multinode-149216 has defined MAC address 52:54:00:d3:29:87 in network mk-multinode-149216
	I0210 13:26:21.737135  614358 main.go:141] libmachine: (multinode-149216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:29:87", ip: ""} in network mk-multinode-149216: {Iface:virbr1 ExpiryTime:2025-02-10 14:23:30 +0000 UTC Type:0 Mac:52:54:00:d3:29:87 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:multinode-149216 Clientid:01:52:54:00:d3:29:87}
	I0210 13:26:21.737159  614358 main.go:141] libmachine: (multinode-149216) DBG | domain multinode-149216 has defined IP address 192.168.39.45 and MAC address 52:54:00:d3:29:87 in network mk-multinode-149216
	I0210 13:26:21.737322  614358 host.go:66] Checking if "multinode-149216" exists ...
	I0210 13:26:21.737635  614358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:26:21.737677  614358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:26:21.753833  614358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0210 13:26:21.754350  614358 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:26:21.754932  614358 main.go:141] libmachine: Using API Version  1
	I0210 13:26:21.754956  614358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:26:21.755369  614358 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:26:21.755591  614358 main.go:141] libmachine: (multinode-149216) Calling .DriverName
	I0210 13:26:21.755802  614358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 13:26:21.755838  614358 main.go:141] libmachine: (multinode-149216) Calling .GetSSHHostname
	I0210 13:26:21.758677  614358 main.go:141] libmachine: (multinode-149216) DBG | domain multinode-149216 has defined MAC address 52:54:00:d3:29:87 in network mk-multinode-149216
	I0210 13:26:21.759134  614358 main.go:141] libmachine: (multinode-149216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:29:87", ip: ""} in network mk-multinode-149216: {Iface:virbr1 ExpiryTime:2025-02-10 14:23:30 +0000 UTC Type:0 Mac:52:54:00:d3:29:87 Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:multinode-149216 Clientid:01:52:54:00:d3:29:87}
	I0210 13:26:21.759181  614358 main.go:141] libmachine: (multinode-149216) DBG | domain multinode-149216 has defined IP address 192.168.39.45 and MAC address 52:54:00:d3:29:87 in network mk-multinode-149216
	I0210 13:26:21.759244  614358 main.go:141] libmachine: (multinode-149216) Calling .GetSSHPort
	I0210 13:26:21.759420  614358 main.go:141] libmachine: (multinode-149216) Calling .GetSSHKeyPath
	I0210 13:26:21.759599  614358 main.go:141] libmachine: (multinode-149216) Calling .GetSSHUsername
	I0210 13:26:21.759739  614358 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/multinode-149216/id_rsa Username:docker}
	I0210 13:26:21.840837  614358 ssh_runner.go:195] Run: systemctl --version
	I0210 13:26:21.847502  614358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:26:21.862121  614358 kubeconfig.go:125] found "multinode-149216" server: "https://192.168.39.45:8443"
	I0210 13:26:21.862171  614358 api_server.go:166] Checking apiserver status ...
	I0210 13:26:21.862206  614358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:26:21.875846  614358 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1066/cgroup
	W0210 13:26:21.886036  614358 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1066/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 13:26:21.886087  614358 ssh_runner.go:195] Run: ls
	I0210 13:26:21.890887  614358 api_server.go:253] Checking apiserver healthz at https://192.168.39.45:8443/healthz ...
	I0210 13:26:21.895642  614358 api_server.go:279] https://192.168.39.45:8443/healthz returned 200:
	ok
	I0210 13:26:21.895669  614358 status.go:463] multinode-149216 apiserver status = Running (err=<nil>)
	I0210 13:26:21.895679  614358 status.go:176] multinode-149216 status: &{Name:multinode-149216 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:26:21.895698  614358 status.go:174] checking status of multinode-149216-m02 ...
	I0210 13:26:21.895996  614358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:26:21.896033  614358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:26:21.911628  614358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
	I0210 13:26:21.912089  614358 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:26:21.912615  614358 main.go:141] libmachine: Using API Version  1
	I0210 13:26:21.912643  614358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:26:21.912971  614358 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:26:21.913139  614358 main.go:141] libmachine: (multinode-149216-m02) Calling .GetState
	I0210 13:26:21.914467  614358 status.go:371] multinode-149216-m02 host status = "Running" (err=<nil>)
	I0210 13:26:21.914486  614358 host.go:66] Checking if "multinode-149216-m02" exists ...
	I0210 13:26:21.914786  614358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:26:21.914829  614358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:26:21.930183  614358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0210 13:26:21.930637  614358 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:26:21.931196  614358 main.go:141] libmachine: Using API Version  1
	I0210 13:26:21.931220  614358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:26:21.931538  614358 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:26:21.931753  614358 main.go:141] libmachine: (multinode-149216-m02) Calling .GetIP
	I0210 13:26:21.934684  614358 main.go:141] libmachine: (multinode-149216-m02) DBG | domain multinode-149216-m02 has defined MAC address 52:54:00:a2:3e:b8 in network mk-multinode-149216
	I0210 13:26:21.935136  614358 main.go:141] libmachine: (multinode-149216-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:3e:b8", ip: ""} in network mk-multinode-149216: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:36 +0000 UTC Type:0 Mac:52:54:00:a2:3e:b8 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:multinode-149216-m02 Clientid:01:52:54:00:a2:3e:b8}
	I0210 13:26:21.935155  614358 main.go:141] libmachine: (multinode-149216-m02) DBG | domain multinode-149216-m02 has defined IP address 192.168.39.52 and MAC address 52:54:00:a2:3e:b8 in network mk-multinode-149216
	I0210 13:26:21.935345  614358 host.go:66] Checking if "multinode-149216-m02" exists ...
	I0210 13:26:21.935664  614358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:26:21.935714  614358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:26:21.951482  614358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36131
	I0210 13:26:21.951910  614358 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:26:21.952477  614358 main.go:141] libmachine: Using API Version  1
	I0210 13:26:21.952510  614358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:26:21.952843  614358 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:26:21.953053  614358 main.go:141] libmachine: (multinode-149216-m02) Calling .DriverName
	I0210 13:26:21.953282  614358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 13:26:21.953310  614358 main.go:141] libmachine: (multinode-149216-m02) Calling .GetSSHHostname
	I0210 13:26:21.956067  614358 main.go:141] libmachine: (multinode-149216-m02) DBG | domain multinode-149216-m02 has defined MAC address 52:54:00:a2:3e:b8 in network mk-multinode-149216
	I0210 13:26:21.956539  614358 main.go:141] libmachine: (multinode-149216-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:3e:b8", ip: ""} in network mk-multinode-149216: {Iface:virbr1 ExpiryTime:2025-02-10 14:24:36 +0000 UTC Type:0 Mac:52:54:00:a2:3e:b8 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:multinode-149216-m02 Clientid:01:52:54:00:a2:3e:b8}
	I0210 13:26:21.956578  614358 main.go:141] libmachine: (multinode-149216-m02) DBG | domain multinode-149216-m02 has defined IP address 192.168.39.52 and MAC address 52:54:00:a2:3e:b8 in network mk-multinode-149216
	I0210 13:26:21.956741  614358 main.go:141] libmachine: (multinode-149216-m02) Calling .GetSSHPort
	I0210 13:26:21.956907  614358 main.go:141] libmachine: (multinode-149216-m02) Calling .GetSSHKeyPath
	I0210 13:26:21.957084  614358 main.go:141] libmachine: (multinode-149216-m02) Calling .GetSSHUsername
	I0210 13:26:21.957209  614358 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20390-580861/.minikube/machines/multinode-149216-m02/id_rsa Username:docker}
	I0210 13:26:22.036000  614358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:26:22.050837  614358 status.go:176] multinode-149216-m02 status: &{Name:multinode-149216-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:26:22.050906  614358 status.go:174] checking status of multinode-149216-m03 ...
	I0210 13:26:22.051273  614358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:26:22.051355  614358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:26:22.067192  614358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0210 13:26:22.067671  614358 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:26:22.068175  614358 main.go:141] libmachine: Using API Version  1
	I0210 13:26:22.068201  614358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:26:22.068557  614358 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:26:22.068790  614358 main.go:141] libmachine: (multinode-149216-m03) Calling .GetState
	I0210 13:26:22.070274  614358 status.go:371] multinode-149216-m03 host status = "Stopped" (err=<nil>)
	I0210 13:26:22.070288  614358 status.go:384] host is not running, skipping remaining checks
	I0210 13:26:22.070293  614358 status.go:176] multinode-149216-m03 status: &{Name:multinode-149216-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
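The status check captured above probes the control plane in two steps: it looks for the kube-apiserver process (and its freezer cgroup) over SSH, then issues an HTTPS GET against https://192.168.39.45:8443/healthz and treats a 200 response as Running. Below is a minimal Go sketch of that last step, reusing the address from this run; it skips TLS verification purely to stay self-contained, whereas minikube's own check handles certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// healthz mirrors the last step of the status check above: an HTTPS GET
	// against the apiserver's /healthz endpoint, where HTTP 200 means "Running".
	func healthz(addr string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for this sketch only: skip certificate verification so
			// the example is self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://" + addr + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := healthz("192.168.39.45:8443") // address taken from the log above
		fmt.Println(ok, err)
	}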

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-149216 node start m03 -v=7 --alsologtostderr: (40.584102626s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.22s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (347.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-149216
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-149216
E0210 13:27:13.590673  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-149216: (3m3.408961889s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-149216 --wait=true -v=8 --alsologtostderr
E0210 13:30:33.642327  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:32:13.584533  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-149216 --wait=true -v=8 --alsologtostderr: (2m43.926254427s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-149216
--- PASS: TestMultiNode/serial/RestartKeepsNodes (347.44s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-149216 node delete m03: (2.145137948s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.70s)
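The go-template passed to kubectl above iterates every node and, for the condition whose type is Ready, prints its status on its own line. The following self-contained Go sketch runs the same template against a small mock node list (the JSON below is illustrative, not output from this run) to show the expected shape of the result:

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// Same template string the test passes to kubectl: for every node, print
	// the status of its "Ready" condition followed by a newline.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Mock two-node list with the same field layout kubectl exposes; this JSON
	// is illustrative and not taken from the run above.
	const mockNodes = `{
	  "items": [
	    {"status": {"conditions": [{"type": "MemoryPressure", "status": "False"},
	                               {"type": "Ready", "status": "True"}]}},
	    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
	  ]
	}`

	func main() {
		var data map[string]interface{}
		if err := json.Unmarshal([]byte(mockNodes), &data); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(readyTmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
		// Prints one " True" line per node whose Ready condition is True.
	}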

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 stop
E0210 13:33:36.711702  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:35:33.642711  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-149216 stop: (3m1.71028235s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-149216 status: exit status 7 (95.561176ms)

                                                
                                                
-- stdout --
	multinode-149216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-149216-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr: exit status 7 (87.161451ms)

                                                
                                                
-- stdout --
	multinode-149216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-149216-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:35:55.289454  617429 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:35:55.289554  617429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:35:55.289558  617429 out.go:358] Setting ErrFile to fd 2...
	I0210 13:35:55.289563  617429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:35:55.289724  617429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:35:55.289891  617429 out.go:352] Setting JSON to false
	I0210 13:35:55.289926  617429 mustload.go:65] Loading cluster: multinode-149216
	I0210 13:35:55.290021  617429 notify.go:220] Checking for updates...
	I0210 13:35:55.290302  617429 config.go:182] Loaded profile config "multinode-149216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:35:55.290323  617429 status.go:174] checking status of multinode-149216 ...
	I0210 13:35:55.290692  617429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:35:55.290731  617429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:35:55.306126  617429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0210 13:35:55.306570  617429 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:35:55.307170  617429 main.go:141] libmachine: Using API Version  1
	I0210 13:35:55.307192  617429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:35:55.307584  617429 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:35:55.307796  617429 main.go:141] libmachine: (multinode-149216) Calling .GetState
	I0210 13:35:55.309468  617429 status.go:371] multinode-149216 host status = "Stopped" (err=<nil>)
	I0210 13:35:55.309489  617429 status.go:384] host is not running, skipping remaining checks
	I0210 13:35:55.309496  617429 status.go:176] multinode-149216 status: &{Name:multinode-149216 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:35:55.309542  617429 status.go:174] checking status of multinode-149216-m02 ...
	I0210 13:35:55.309982  617429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0210 13:35:55.310035  617429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0210 13:35:55.325668  617429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39411
	I0210 13:35:55.326105  617429 main.go:141] libmachine: () Calling .GetVersion
	I0210 13:35:55.326557  617429 main.go:141] libmachine: Using API Version  1
	I0210 13:35:55.326579  617429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0210 13:35:55.326932  617429 main.go:141] libmachine: () Calling .GetMachineName
	I0210 13:35:55.327134  617429 main.go:141] libmachine: (multinode-149216-m02) Calling .GetState
	I0210 13:35:55.328533  617429 status.go:371] multinode-149216-m02 host status = "Stopped" (err=<nil>)
	I0210 13:35:55.328545  617429 status.go:384] host is not running, skipping remaining checks
	I0210 13:35:55.328551  617429 status.go:176] multinode-149216-m02 status: &{Name:multinode-149216-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.89s)
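minikube status exits non-zero when the cluster is not fully running, which is why the exit status 7 above is the expected outcome for a stopped cluster rather than a test failure. A minimal Go sketch, reusing the binary path and profile name from this run, that separates a non-zero status code from a failure to execute the command at all:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the test above; the binary path and profile name
		// are taken from this run and would differ in another environment.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-149216", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components reported running")
		case errors.As(err, &exitErr):
			// In the run above, a fully stopped cluster exited with status 7.
			fmt.Println("non-zero status exit code:", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}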

                                                
                                    
TestMultiNode/serial/RestartMultiNode (118.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-149216 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0210 13:37:13.584535  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-149216 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m57.652095202s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-149216 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (118.19s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-149216
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-149216-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-149216-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.524272ms)

                                                
                                                
-- stdout --
	* [multinode-149216-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-149216-m02' is duplicated with machine name 'multinode-149216-m02' in profile 'multinode-149216'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-149216-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-149216-m03 --driver=kvm2  --container-runtime=crio: (45.887348521s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-149216
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-149216: exit status 80 (217.736493ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-149216 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-149216-m03 already exists in multinode-149216-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-149216-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.03s)

                                                
                                    
TestScheduledStopUnix (115.41s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-961014 --memory=2048 --driver=kvm2  --container-runtime=crio
E0210 13:41:56.659817  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:42:13.584764  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-961014 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.759405732s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961014 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-961014 -n scheduled-stop-961014
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961014 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0210 13:42:25.220683  588140 retry.go:31] will retry after 54.476µs: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.221800  588140 retry.go:31] will retry after 220.249µs: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.222951  588140 retry.go:31] will retry after 329.462µs: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.224086  588140 retry.go:31] will retry after 479.732µs: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.225215  588140 retry.go:31] will retry after 419.44µs: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.226345  588140 retry.go:31] will retry after 648.396µs: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.227487  588140 retry.go:31] will retry after 1.617146ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.229706  588140 retry.go:31] will retry after 2.532572ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.232942  588140 retry.go:31] will retry after 2.738948ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.236123  588140 retry.go:31] will retry after 5.130657ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.242326  588140 retry.go:31] will retry after 7.50146ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.250519  588140 retry.go:31] will retry after 10.081559ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.260663  588140 retry.go:31] will retry after 10.050993ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.270821  588140 retry.go:31] will retry after 14.000259ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
I0210 13:42:25.285090  588140 retry.go:31] will retry after 39.149307ms: open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/scheduled-stop-961014/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961014 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-961014 -n scheduled-stop-961014
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-961014
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961014 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-961014
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-961014: exit status 7 (77.100187ms)

                                                
                                                
-- stdout --
	scheduled-stop-961014
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-961014 -n scheduled-stop-961014
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-961014 -n scheduled-stop-961014: exit status 7 (66.676058ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-961014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-961014
--- PASS: TestScheduledStopUnix (115.41s)
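The test above drives the scheduled-stop feature with `stop --schedule <duration>` and cancels it with `stop --cancel-scheduled`, both shown verbatim in the log. A minimal Go sketch that chains the two calls for the same profile; the binary path and profile name are taken from this run and are illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary used in this report and prints its
	// output; the path mirrors the log above and is illustrative only.
	func run(args ...string) error {
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		profile := "scheduled-stop-961014"
		// Ask minikube to stop the profile five minutes from now...
		if err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
			fmt.Println("schedule failed:", err)
			return
		}
		// ...then cancel the pending stop, as the test does part-way through.
		if err := run("stop", "-p", profile, "--cancel-scheduled"); err != nil {
			fmt.Println("cancel failed:", err)
		}
	}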

                                                
                                    
TestRunningBinaryUpgrade (171.17s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.669062994 start -p running-upgrade-115286 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.669062994 start -p running-upgrade-115286 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m25.043977992s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-115286 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-115286 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.280516832s)
helpers_test.go:175: Cleaning up "running-upgrade-115286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-115286
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-115286: (1.371390352s)
--- PASS: TestRunningBinaryUpgrade (171.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013011 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-013011 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (87.257358ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-013011] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013011 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013011 --driver=kvm2  --container-runtime=crio: (1m34.263364572s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-013011 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.52s)

                                                
                                    
TestNetworkPlugins/group/false (3.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-020784 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-020784 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.421078ms)

                                                
                                                
-- stdout --
	* [false-020784] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:43:39.638683  621891 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:43:39.638786  621891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:43:39.638791  621891 out.go:358] Setting ErrFile to fd 2...
	I0210 13:43:39.638795  621891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:43:39.639000  621891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-580861/.minikube/bin
	I0210 13:43:39.639654  621891 out.go:352] Setting JSON to false
	I0210 13:43:39.640707  621891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12365,"bootTime":1739182655,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:43:39.640814  621891 start.go:139] virtualization: kvm guest
	I0210 13:43:39.642839  621891 out.go:177] * [false-020784] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:43:39.644133  621891 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 13:43:39.644158  621891 notify.go:220] Checking for updates...
	I0210 13:43:39.646533  621891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:43:39.647635  621891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-580861/kubeconfig
	I0210 13:43:39.648815  621891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-580861/.minikube
	I0210 13:43:39.650006  621891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:43:39.651154  621891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:43:39.652953  621891 config.go:182] Loaded profile config "NoKubernetes-013011": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:43:39.653134  621891 config.go:182] Loaded profile config "force-systemd-env-139209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:43:39.653273  621891 config.go:182] Loaded profile config "offline-crio-947434": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0210 13:43:39.653421  621891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:43:39.689632  621891 out.go:177] * Using the kvm2 driver based on user configuration
	I0210 13:43:39.690774  621891 start.go:297] selected driver: kvm2
	I0210 13:43:39.690786  621891 start.go:901] validating driver "kvm2" against <nil>
	I0210 13:43:39.690799  621891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:43:39.692661  621891 out.go:201] 
	W0210 13:43:39.693753  621891 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0210 13:43:39.694732  621891 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-020784 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-020784" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-020784

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-020784"

                                                
                                                
----------------------- debugLogs end: false-020784 [took: 2.883529084s] --------------------------------
helpers_test.go:175: Cleaning up "false-020784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-020784
--- PASS: TestNetworkPlugins/group/false (3.15s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (68.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013011 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0210 13:45:33.642513  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013011 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m7.003591065s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-013011 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-013011 status -o json: exit status 2 (275.776265ms)
-- stdout --
	{"Name":"NoKubernetes-013011","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-013011
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (68.14s)
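
Note: the "exit status 2" from "status -o json" above is expected for a --no-kubernetes profile: the VM host is Running while Kubelet and APIServer stay Stopped, and that stopped state is what the non-zero exit code reflects. A minimal sketch of the same check (profile name taken from the commands above; "|| true" only keeps a script from aborting on the expected non-zero exit):

	out/minikube-linux-amd64 -p NoKubernetes-013011 status -o json || true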

TestNoKubernetes/serial/Start (48.2s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013011 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013011 --no-kubernetes --driver=kvm2  --container-runtime=crio: (48.202502897s)
--- PASS: TestNoKubernetes/serial/Start (48.20s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-013011 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-013011 "sudo systemctl is-active --quiet service kubelet": exit status 1 (231.171125ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
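
Note: the "exit status 1" / "Process exited with status 3" above is the expected result, not a failure: systemctl is-active exits 0 only when the unit is active, and exit code 3 indicates an inactive unit, which is exactly what the kubelet should look like on a --no-kubernetes profile. A simplified sketch of the same probe:

	out/minikube-linux-amd64 ssh -p NoKubernetes-013011 "sudo systemctl is-active kubelet" \
	  || echo "kubelet is not active (expected here)"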

TestNoKubernetes/serial/ProfileList (1.65s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.65s)

TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-013011
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-013011: (1.350149578s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (45.52s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-013011 --driver=kvm2  --container-runtime=crio
E0210 13:47:13.585433  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-013011 --driver=kvm2  --container-runtime=crio: (45.521964226s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.52s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-013011 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-013011 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.076697ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestStoppedBinaryUpgrade/Setup (3.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.42s)

TestStoppedBinaryUpgrade/Upgrade (103.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3272281455 start -p stopped-upgrade-667631 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3272281455 start -p stopped-upgrade-667631 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.015851779s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3272281455 -p stopped-upgrade-667631 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3272281455 -p stopped-upgrade-667631 stop: (2.152622656s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-667631 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-667631 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.38616261s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.56s)

TestPause/serial/Start (59.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145767 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-145767 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (59.645502291s)
--- PASS: TestPause/serial/Start (59.65s)

TestNetworkPlugins/group/auto/Start (59.73s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (59.727086222s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.73s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-667631
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

TestNetworkPlugins/group/kindnet/Start (92.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0210 13:50:16.713834  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:50:33.643039  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m32.814468634s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.81s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-020784 "pgrep -a kubelet"
I0210 13:50:43.409949  588140 config.go:182] Loaded profile config "auto-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

TestNetworkPlugins/group/auto/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-020784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-klqx4" [069c5140-cbbc-43df-9089-7e4a7aa091d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-klqx4" [069c5140-cbbc-43df-9089-7e4a7aa091d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00473984s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.42s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-020784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
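
Note: the DNS, Localhost and HairPin cases above share one pattern: they exec into the netcat deployment and probe connectivity from inside the pod. DNS resolves kubernetes.default through the cluster DNS, Localhost connects to the pod's own port on 127.0.0.1, and HairPin connects back to the pod through its own "netcat" Service name (the hairpin path). A condensed sketch of the three probes, with names taken from the commands above and the "-i 5" interval dropped for brevity:

	kubectl --context auto-020784 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
	kubectl --context auto-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"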

TestNetworkPlugins/group/calico/Start (94.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.434729462s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.43s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tthgf" [0db0e7c0-cdf5-4c45-92b1-5846f8e0318e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004108576s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-020784 "pgrep -a kubelet"
I0210 13:51:25.174572  588140 config.go:182] Loaded profile config "kindnet-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-020784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5kt7t" [b4527fc1-364e-46ce-89de-26d93fc71e88] Pending
helpers_test.go:344: "netcat-5d86dc444-5kt7t" [b4527fc1-364e-46ce-89de-26d93fc71e88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004728686s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.24s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-020784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (81.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0210 13:52:13.585078  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.424745807s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.42s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g96mz" [fdde2de5-0597-425c-ada5-ec602eee158b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004963277s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-020784 "pgrep -a kubelet"
I0210 13:52:50.569178  588140 config.go:182] Loaded profile config "calico-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (13.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-020784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2v5jd" [ab52ef9f-a31f-4f6a-bc15-6d4bfee017bd] Pending
helpers_test.go:344: "netcat-5d86dc444-2v5jd" [ab52ef9f-a31f-4f6a-bc15-6d4bfee017bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2v5jd" [ab52ef9f-a31f-4f6a-bc15-6d4bfee017bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004855266s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.30s)

TestNetworkPlugins/group/enable-default-cni/Start (66.59s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m6.58542744s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.59s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-020784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-020784 "pgrep -a kubelet"
I0210 13:53:16.441385  588140 config.go:182] Loaded profile config "custom-flannel-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-020784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pzq76" [b2e635df-1f94-4d3f-a5f5-87f0b9c52dee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-pzq76" [b2e635df-1f94-4d3f-a5f5-87f0b9c52dee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005111309s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

TestNetworkPlugins/group/flannel/Start (74.84s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.837288875s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.84s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-020784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (62.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-020784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m2.612089866s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.61s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-020784 "pgrep -a kubelet"
I0210 13:53:59.910848  588140 config.go:182] Loaded profile config "enable-default-cni-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-020784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-w2kcc" [c7f53076-f381-48f8-93c6-20a893b8d3d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-w2kcc" [c7f53076-f381-48f8-93c6-20a893b8d3d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007924242s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-020784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-sm7tq" [fa985948-44a8-4628-a2a7-ad8ca9b0e454] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005095463s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-020784 "pgrep -a kubelet"
I0210 13:54:41.802347  588140 config.go:182] Loaded profile config "flannel-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-020784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hnsrz" [20260a7c-45de-4dcd-84eb-8b379347fb9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hnsrz" [20260a7c-45de-4dcd-84eb-8b379347fb9e] Running
I0210 13:54:47.215395  588140 config.go:182] Loaded profile config "bridge-020784": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005240276s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-020784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-020784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lvbqh" [1b0a5a06-7106-4b67-bdfd-fac324eaae0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lvbqh" [1b0a5a06-7106-4b67-bdfd-fac324eaae0d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004794189s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-020784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-020784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-020784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0210 14:04:27.867836  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (75.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-264648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-264648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m15.604129073s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.60s)

TestStartStop/group/embed-certs/serial/FirstStart (76.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-963165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:55:33.642988  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:43.815642  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:43.822006  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:43.833369  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:43.854739  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:43.896142  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:43.977656  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:44.139220  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:44.461315  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:45.103460  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:46.385562  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:48.947837  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:55:54.069604  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:04.311980  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:18.943536  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:18.949927  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:18.961388  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:18.982802  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:19.024346  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:19.106300  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:19.267922  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:19.590180  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:20.232665  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:21.514812  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:24.076717  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:56:24.793449  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-963165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m16.125013778s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.13s)

TestStartStop/group/no-preload/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-264648 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ece7dc9a-9f8a-44aa-aedc-78da6abd89c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0210 13:56:29.198761  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [ece7dc9a-9f8a-44aa-aedc-78da6abd89c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003568569s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-264648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-963165 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7fa921c2-bce1-437b-aa51-aea4273f71bd] Pending
helpers_test.go:344: "busybox" [7fa921c2-bce1-437b-aa51-aea4273f71bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7fa921c2-bce1-437b-aa51-aea4273f71bd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003541848s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-963165 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-264648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-264648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/no-preload/serial/Stop (91.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-264648 --alsologtostderr -v=3
E0210 13:56:39.440352  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-264648 --alsologtostderr -v=3: (1m31.01111799s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-963165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-963165 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/embed-certs/serial/Stop (91.47s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-963165 --alsologtostderr -v=3
E0210 13:56:59.922730  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:05.754991  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:13.584886  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:40.884842  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.328443  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.334922  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.346298  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.367749  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.409210  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.490731  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.652361  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:44.974063  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:45.615822  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:46.897530  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:49.459408  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:57:54.581676  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:04.823218  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-963165 --alsologtostderr -v=3: (1m31.473898175s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.47s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-264648 -n no-preload-264648
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-264648 -n no-preload-264648: exit status 7 (76.767665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-264648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (349.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-264648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-264648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m48.919820129s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-264648 -n no-preload-264648
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-963165 -n embed-certs-963165
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-963165 -n embed-certs-963165: exit status 7 (77.19785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-963165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (319.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-963165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 13:58:16.656471  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:16.662986  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:16.674418  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:16.695886  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:16.737312  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:16.818806  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:16.980391  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:17.302365  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:17.944613  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:19.226886  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:21.789388  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:25.305371  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:26.910751  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:27.677031  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/auto-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:36.661986  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:37.152092  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:58:57.634165  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.165432  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.171795  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.183242  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.204722  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.246171  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.327803  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.489384  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:00.810980  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:01.452428  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:02.733932  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:59:02.806581  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-963165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m18.982328087s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-963165 -n embed-certs-963165
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (319.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-643105 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-643105 --alsologtostderr -v=3: (2.571127928s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-643105 -n old-k8s-version-643105: exit status 7 (77.019584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-643105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-991097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-991097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m3.001684852s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mwgjw" [9c4588c0-e725-42d1-b7b9-105cd2474da1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005712305s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mwgjw" [9c4588c0-e725-42d1-b7b9-105cd2474da1] Running
E0210 14:03:44.359684  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005473603s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-963165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-963165 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-963165 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-963165 -n embed-certs-963165
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-963165 -n embed-certs-963165: exit status 2 (253.12662ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-963165 -n embed-certs-963165
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-963165 -n embed-certs-963165: exit status 2 (248.928184ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-963165 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-963165 -n embed-certs-963165
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-963165 -n embed-certs-963165
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-187291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-187291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m0.041714279s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4pt8t" [4c5c34cc-1c41-4d69-a70a-102c5d872635] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0210 14:04:00.165375  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4pt8t" [4c5c34cc-1c41-4d69-a70a-102c5d872635] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004375583s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4pt8t" [4c5c34cc-1c41-4d69-a70a-102c5d872635] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00451876s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-264648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-264648 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-264648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-264648 -n no-preload-264648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-264648 -n no-preload-264648: exit status 2 (239.736604ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-264648 -n no-preload-264648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-264648 -n no-preload-264648: exit status 2 (247.256919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-264648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-264648 -n no-preload-264648
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-264648 -n no-preload-264648
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-991097 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [604aadc9-715d-46e3-9122-4ab7397eca1f] Pending
E0210 14:04:35.565420  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [604aadc9-715d-46e3-9122-4ab7397eca1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [604aadc9-715d-46e3-9122-4ab7397eca1f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003968005s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-991097 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-991097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-991097 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-991097 --alsologtostderr -v=3
E0210 14:04:47.442704  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-991097 --alsologtostderr -v=3: (1m31.045134109s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-187291 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-187291 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.485854649s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-187291 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-187291 --alsologtostderr -v=3: (10.614863836s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-187291 -n newest-cni-187291
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-187291 -n newest-cni-187291: exit status 7 (68.420879ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-187291 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-187291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 14:05:03.269420  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:05:15.144392  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/bridge-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:05:33.642590  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-187291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (37.596726161s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-187291 -n newest-cni-187291
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-187291 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-187291 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-187291 -n newest-cni-187291
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-187291 -n newest-cni-187291: exit status 2 (232.705814ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-187291 -n newest-cni-187291
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-187291 -n newest-cni-187291: exit status 2 (244.063208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-187291 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-187291 -n newest-cni-187291
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-187291 -n newest-cni-187291
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097: exit status 7 (68.43873ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-991097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (374.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-991097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0210 14:06:18.943039  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/kindnet-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:27.874698  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:27.881173  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:27.892586  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:27.914071  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:27.955639  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:28.037201  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:28.198863  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:28.520692  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:29.162796  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:30.444883  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:33.006811  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:38.128803  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:48.370999  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:06:56.715459  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/functional-729385/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:07:08.853164  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:07:13.585458  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/addons-692802/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:07:44.327778  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/calico-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:07:49.814521  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/no-preload-264648/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:08:16.656291  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/custom-flannel-020784/client.crt: no such file or directory" logger="UnhandledError"
E0210 14:09:00.165609  588140 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-580861/.minikube/profiles/enable-default-cni-020784/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-991097 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (6m14.593215143s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (374.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pnbfg" [6c2472ea-734f-43d1-af67-ba0d41e9afa5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pnbfg" [6c2472ea-734f-43d1-af67-ba0d41e9afa5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.00439788s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pnbfg" [6c2472ea-734f-43d1-af67-ba0d41e9afa5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003438198s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-991097 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-991097 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-991097 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097: exit status 2 (240.282909ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097: exit status 2 (239.846359ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-991097 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-991097 -n default-k8s-diff-port-991097
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.07
266 TestNetworkPlugins/group/cilium 3.36
281 TestStartStop/group/disable-driver-mounts 0.14
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-692802 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-020784 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
	>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-020784" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-020784

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-020784"

                                                
                                                
----------------------- debugLogs end: kubenet-020784 [took: 2.923546554s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-020784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-020784
--- SKIP: TestNetworkPlugins/group/kubenet (3.07s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-020784 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
	>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-020784" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-020784

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-020784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-020784"

                                                
                                                
----------------------- debugLogs end: cilium-020784 [took: 3.219349615s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-020784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-020784
--- SKIP: TestNetworkPlugins/group/cilium (3.36s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-372614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-372614
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)